Test Report: Docker_Linux_crio_arm64 21865

cab6d1f65c4aa1004a9668d09bfc3b97700b5cd8:2025-11-08:42250

Tests failed (36/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.33
35 TestAddons/parallel/Registry 15.06
36 TestAddons/parallel/RegistryCreds 0.49
37 TestAddons/parallel/Ingress 145.71
38 TestAddons/parallel/InspektorGadget 6.34
39 TestAddons/parallel/MetricsServer 6.38
41 TestAddons/parallel/CSI 44.4
42 TestAddons/parallel/Headlamp 3.15
43 TestAddons/parallel/CloudSpanner 6.3
44 TestAddons/parallel/LocalPath 8.49
45 TestAddons/parallel/NvidiaDevicePlugin 6.28
46 TestAddons/parallel/Yakd 6.27
97 TestFunctional/parallel/ServiceCmdConnect 603.48
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.81
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
146 TestFunctional/parallel/ServiceCmd/DeployApp 600.82
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.57
153 TestFunctional/parallel/ServiceCmd/Format 0.47
154 TestFunctional/parallel/ServiceCmd/URL 0.42
191 TestJSONOutput/pause/Command 1.54
197 TestJSONOutput/unpause/Command 1.92
281 TestPause/serial/Pause 6.84
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.46
303 TestStartStop/group/old-k8s-version/serial/Pause 6.82
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.46
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.77
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.04
327 TestStartStop/group/embed-certs/serial/Pause 9.1
331 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.03
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.43
343 TestStartStop/group/newest-cni/serial/Pause 7.35
348 TestStartStop/group/no-preload/serial/Pause 6.93
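Each of these failures can be replayed in isolation. The minikube integration suite consists of ordinary Go tests under test/integration, so a single case from the table can be re-run with go test's -run filter; the checkout location, the need for a pre-built out/minikube-linux-arm64 binary, and any extra start arguments matching this job (docker driver, crio runtime) are assumptions here, not part of the report.

# Hedged sketch: replay one failed test from the table against a locally built minikube.
cd minikube        # assumption: a minikube source checkout with the arm64 binary already built
go test ./test/integration -run 'TestAddons/parallel/Registry' -v -timeout 30m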
TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable volcano --alsologtostderr -v=1: exit status 11 (330.691773ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:36:25.720057 1035991 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:36:25.722553 1035991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:36:25.722671 1035991 out.go:374] Setting ErrFile to fd 2...
	I1108 09:36:25.722699 1035991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:36:25.723086 1035991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:36:25.723490 1035991 mustload.go:66] Loading cluster: addons-517137
	I1108 09:36:25.723980 1035991 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:36:25.724028 1035991 addons.go:607] checking whether the cluster is paused
	I1108 09:36:25.724194 1035991 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:36:25.724227 1035991 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:36:25.724821 1035991 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:36:25.756298 1035991 ssh_runner.go:195] Run: systemctl --version
	I1108 09:36:25.756352 1035991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:36:25.774800 1035991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:36:25.879026 1035991 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:36:25.879111 1035991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:36:25.908678 1035991 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:36:25.908712 1035991 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:36:25.908717 1035991 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:36:25.908721 1035991 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:36:25.908725 1035991 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:36:25.908729 1035991 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:36:25.908732 1035991 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:36:25.908736 1035991 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:36:25.908739 1035991 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:36:25.908800 1035991 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:36:25.908809 1035991 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:36:25.908813 1035991 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:36:25.908816 1035991 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:36:25.908819 1035991 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:36:25.908822 1035991 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:36:25.908831 1035991 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:36:25.908838 1035991 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:36:25.908843 1035991 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:36:25.908846 1035991 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:36:25.908849 1035991 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:36:25.908854 1035991 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:36:25.908857 1035991 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:36:25.908859 1035991 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:36:25.908862 1035991 cri.go:89] found id: ""
	I1108 09:36:25.908918 1035991 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:36:25.925286 1035991 out.go:203] 
	W1108 09:36:25.928327 1035991 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:36:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:36:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:36:25.928368 1035991 out.go:285] * 
	* 
	W1108 09:36:25.937738 1035991 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:36:25.941079 1035991 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.33s)
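This failure comes from minikube's paused-cluster check: before disabling an addon it lists the kube-system containers and then runs sudo runc list -f json, which exits 1 because /run/runc does not exist on this crio node (the Registry and RegistryCreds failures below end with the same signature). The check can be replayed by hand over SSH; the profile name and the crictl/runc commands are copied from the log above, while the candidate state directories for the actual OCI runtime (/run/crun, /run/crio) are assumptions meant to narrow down where container state really lives on the node.

minikube -p addons-517137 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
minikube -p addons-517137 ssh -- sudo runc list -f json        # reproduces: open /run/runc: no such file or directory
minikube -p addons-517137 ssh -- ls -d /run/runc /run/crun /run/crio   # assumption: see which runtime state dir actually exists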

TestAddons/parallel/Registry (15.06s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.381578ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004132355s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003600196s
addons_test.go:392: (dbg) Run:  kubectl --context addons-517137 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-517137 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-517137 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.480248193s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable registry --alsologtostderr -v=1: exit status 11 (280.460077ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:36:51.051439 1036930 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:36:51.052186 1036930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:36:51.052225 1036930 out.go:374] Setting ErrFile to fd 2...
	I1108 09:36:51.052246 1036930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:36:51.052615 1036930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:36:51.052951 1036930 mustload.go:66] Loading cluster: addons-517137
	I1108 09:36:51.053431 1036930 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:36:51.053517 1036930 addons.go:607] checking whether the cluster is paused
	I1108 09:36:51.053663 1036930 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:36:51.053701 1036930 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:36:51.054216 1036930 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:36:51.072836 1036930 ssh_runner.go:195] Run: systemctl --version
	I1108 09:36:51.072906 1036930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:36:51.092399 1036930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:36:51.202864 1036930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:36:51.203002 1036930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:36:51.235336 1036930 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:36:51.235406 1036930 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:36:51.235428 1036930 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:36:51.235458 1036930 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:36:51.235487 1036930 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:36:51.235505 1036930 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:36:51.235523 1036930 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:36:51.235545 1036930 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:36:51.235586 1036930 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:36:51.235607 1036930 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:36:51.235624 1036930 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:36:51.235648 1036930 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:36:51.235680 1036930 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:36:51.235701 1036930 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:36:51.235723 1036930 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:36:51.235753 1036930 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:36:51.235788 1036930 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:36:51.235807 1036930 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:36:51.235836 1036930 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:36:51.235861 1036930 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:36:51.235889 1036930 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:36:51.235917 1036930 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:36:51.235941 1036930 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:36:51.235962 1036930 cri.go:89] found id: ""
	I1108 09:36:51.236043 1036930 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:36:51.253035 1036930 out.go:203] 
	W1108 09:36:51.256099 1036930 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:36:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:36:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:36:51.256126 1036930 out.go:285] * 
	* 
	W1108 09:36:51.264953 1036930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:36:51.268089 1036930 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.06s)
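Note that the registry addon itself was healthy in this run: both the registry and registry-proxy pods became ready and the in-cluster wget probe succeeded, so only the trailing addons disable step failed with the same paused-check error as above. The probe can be repeated manually with the command the test ran, copied verbatim from the log:

kubectl --context addons-517137 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"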

TestAddons/parallel/RegistryCreds (0.49s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.073777ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-517137
addons_test.go:332: (dbg) Run:  kubectl --context addons-517137 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (267.441408ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:37:24.028548 1037999 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:37:24.029832 1037999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:24.029863 1037999 out.go:374] Setting ErrFile to fd 2...
	I1108 09:37:24.029870 1037999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:24.030175 1037999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:37:24.030518 1037999 mustload.go:66] Loading cluster: addons-517137
	I1108 09:37:24.030941 1037999 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:24.030960 1037999 addons.go:607] checking whether the cluster is paused
	I1108 09:37:24.031071 1037999 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:24.031087 1037999 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:37:24.031697 1037999 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:37:24.053908 1037999 ssh_runner.go:195] Run: systemctl --version
	I1108 09:37:24.053966 1037999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:37:24.071803 1037999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:37:24.180195 1037999 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:37:24.180331 1037999 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:37:24.210200 1037999 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:37:24.210223 1037999 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:37:24.210228 1037999 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:37:24.210232 1037999 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:37:24.210235 1037999 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:37:24.210239 1037999 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:37:24.210242 1037999 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:37:24.210245 1037999 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:37:24.210268 1037999 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:37:24.210281 1037999 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:37:24.210285 1037999 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:37:24.210289 1037999 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:37:24.210310 1037999 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:37:24.210314 1037999 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:37:24.210317 1037999 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:37:24.210324 1037999 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:37:24.210330 1037999 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:37:24.210349 1037999 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:37:24.210352 1037999 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:37:24.210355 1037999 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:37:24.210361 1037999 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:37:24.210366 1037999 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:37:24.210370 1037999 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:37:24.210373 1037999 cri.go:89] found id: ""
	I1108 09:37:24.210440 1037999 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:37:24.225949 1037999 out.go:203] 
	W1108 09:37:24.228930 1037999 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:37:24.228955 1037999 out.go:285] * 
	* 
	W1108 09:37:24.237305 1037999 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:37:24.240429 1037999 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.49s)

TestAddons/parallel/Ingress (145.71s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-517137 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-517137 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-517137 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [041e0ff4-affa-4aa1-a01d-d4cf92904a9b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [041e0ff4-affa-4aa1-a01d-d4cf92904a9b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003023005s
I1108 09:37:14.691637 1029234 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.111935738s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-517137 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-517137
helpers_test.go:243: (dbg) docker inspect addons-517137:

-- stdout --
	[
	    {
	        "Id": "257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96",
	        "Created": "2025-11-08T09:33:58.811367027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1030391,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:33:58.871150487Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96/hostname",
	        "HostsPath": "/var/lib/docker/containers/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96/hosts",
	        "LogPath": "/var/lib/docker/containers/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96-json.log",
	        "Name": "/addons-517137",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-517137:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-517137",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96",
	                "LowerDir": "/var/lib/docker/overlay2/db866645afeeb5823a6aa93f3283972ce4e7dead8d77e0804159a3b125b3156f-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db866645afeeb5823a6aa93f3283972ce4e7dead8d77e0804159a3b125b3156f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db866645afeeb5823a6aa93f3283972ce4e7dead8d77e0804159a3b125b3156f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db866645afeeb5823a6aa93f3283972ce4e7dead8d77e0804159a3b125b3156f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-517137",
	                "Source": "/var/lib/docker/volumes/addons-517137/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-517137",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-517137",
	                "name.minikube.sigs.k8s.io": "addons-517137",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afa35d8208a75f9f48ae9c9a21f124fdcd31e0e3fd666d101c56c88535cccfe1",
	            "SandboxKey": "/var/run/docker/netns/afa35d8208a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34225"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34226"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34229"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34228"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-517137": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:2d:20:2c:56:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "39510444f175bd235dac69fe9d69b5513ff5ee07ecfdb89db58c965ceccc7ed9",
	                    "EndpointID": "bb8bbf71b501d5268c7c8296f8abd9ef545bbc3924a8af21a70811ebb3f77da0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-517137",
	                        "257291073ebc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-517137 -n addons-517137
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-517137 logs -n 25: (1.460534689s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-871809                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-871809 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ start   │ --download-only -p binary-mirror-870798 --alsologtostderr --binary-mirror http://127.0.0.1:37897 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-870798   │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ delete  │ -p binary-mirror-870798                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-870798   │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ addons  │ enable dashboard -p addons-517137                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ addons  │ disable dashboard -p addons-517137                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ start   │ -p addons-517137 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:36 UTC │
	│ addons  │ addons-517137 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-517137 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:36 UTC │                     │
	│ addons  │ enable headlamp -p addons-517137 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-517137 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:36 UTC │                     │
	│ ip      │ addons-517137 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:36 UTC │ 08 Nov 25 09:36 UTC │
	│ addons  │ addons-517137 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-517137 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-517137 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:37 UTC │                     │
	│ ssh     │ addons-517137 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:37 UTC │                     │
	│ addons  │ addons-517137 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:37 UTC │                     │
	│ addons  │ addons-517137 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:37 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-517137                                                                                                                                                                                                                                                                                                                                                                                           │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:37 UTC │ 08 Nov 25 09:37 UTC │
	│ addons  │ addons-517137 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:37 UTC │                     │
	│ addons  │ addons-517137 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:37 UTC │                     │
	│ addons  │ addons-517137 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:37 UTC │                     │
	│ ssh     │ addons-517137 ssh cat /opt/local-path-provisioner/pvc-975b142a-cf8a-4ec0-aa0b-29691c63b381_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:37 UTC │ 08 Nov 25 09:37 UTC │
	│ addons  │ addons-517137 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:37 UTC │                     │
	│ addons  │ addons-517137 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:37 UTC │                     │
	│ ip      │ addons-517137 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:39 UTC │ 08 Nov 25 09:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:33:32
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
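
The header lines above define the format used for every entry in this start log: a severity letter, month and day, a microsecond timestamp, the thread id, the source file and line, and the message. Purely as an illustration (this parser is not part of minikube; the file, function, and variable names are invented for the example), a small Go sketch that splits such a line into its fields could look like:

	// klogparse.go: minimal sketch for splitting "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" entries.
	package main

	import (
		"fmt"
		"regexp"
	)

	// Capture groups: severity, month+day, time, thread id, file:line, message.
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ ]+:\d+)\] (.*)$`)

	func main() {
		// Sample taken from the first entry of the log below.
		sample := "I1108 09:33:32.909072 1029992 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
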
	I1108 09:33:32.909072 1029992 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:33:32.909204 1029992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:32.909215 1029992 out.go:374] Setting ErrFile to fd 2...
	I1108 09:33:32.909221 1029992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:32.909469 1029992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:33:32.909902 1029992 out.go:368] Setting JSON to false
	I1108 09:33:32.910691 1029992 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29758,"bootTime":1762564655,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 09:33:32.910756 1029992 start.go:143] virtualization:  
	I1108 09:33:32.918256 1029992 out.go:179] * [addons-517137] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 09:33:32.924121 1029992 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:33:32.924187 1029992 notify.go:221] Checking for updates...
	I1108 09:33:32.934232 1029992 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:33:32.943966 1029992 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 09:33:32.974465 1029992 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 09:33:33.007406 1029992 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 09:33:33.039355 1029992 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:33:33.073168 1029992 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:33:33.095560 1029992 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:33:33.095687 1029992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:33:33.153055 1029992 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-08 09:33:33.141786701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:33:33.153171 1029992 docker.go:319] overlay module found
	I1108 09:33:33.183872 1029992 out.go:179] * Using the docker driver based on user configuration
	I1108 09:33:33.216889 1029992 start.go:309] selected driver: docker
	I1108 09:33:33.216917 1029992 start.go:930] validating driver "docker" against <nil>
	I1108 09:33:33.216932 1029992 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:33:33.217697 1029992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:33:33.272856 1029992 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-08 09:33:33.26409427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:33:33.273017 1029992 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:33:33.273268 1029992 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:33:33.298367 1029992 out.go:179] * Using Docker driver with root privileges
	I1108 09:33:33.343779 1029992 cni.go:84] Creating CNI manager for ""
	I1108 09:33:33.343867 1029992 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:33:33.343883 1029992 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:33:33.343973 1029992 start.go:353] cluster config:
	{Name:addons-517137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-517137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1108 09:33:33.375276 1029992 out.go:179] * Starting "addons-517137" primary control-plane node in "addons-517137" cluster
	I1108 09:33:33.406936 1029992 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:33:33.439837 1029992 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:33:33.470488 1029992 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:33:33.470573 1029992 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 09:33:33.470488 1029992 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:33:33.470585 1029992 cache.go:59] Caching tarball of preloaded images
	I1108 09:33:33.470765 1029992 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 09:33:33.470774 1029992 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:33:33.471119 1029992 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/config.json ...
	I1108 09:33:33.471140 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/config.json: {Name:mk335e1c9e903d2c98e81d98ab41a753d3cbaa26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:33.487085 1029992 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:33:33.487235 1029992 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 09:33:33.487260 1029992 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1108 09:33:33.487265 1029992 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1108 09:33:33.487276 1029992 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1108 09:33:33.487287 1029992 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1108 09:33:51.969325 1029992 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1108 09:33:51.969360 1029992 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:33:51.969390 1029992 start.go:360] acquireMachinesLock for addons-517137: {Name:mka52ee401f9ddfa9995f7d13ae17ba555b99bae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:33:51.969499 1029992 start.go:364] duration metric: took 90.295µs to acquireMachinesLock for "addons-517137"
	I1108 09:33:51.969524 1029992 start.go:93] Provisioning new machine with config: &{Name:addons-517137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-517137 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:33:51.969589 1029992 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:33:51.973104 1029992 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1108 09:33:51.973360 1029992 start.go:159] libmachine.API.Create for "addons-517137" (driver="docker")
	I1108 09:33:51.973397 1029992 client.go:173] LocalClient.Create starting
	I1108 09:33:51.973515 1029992 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem
	I1108 09:33:52.095410 1029992 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem
	I1108 09:33:53.163487 1029992 cli_runner.go:164] Run: docker network inspect addons-517137 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:33:53.178819 1029992 cli_runner.go:211] docker network inspect addons-517137 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:33:53.178899 1029992 network_create.go:284] running [docker network inspect addons-517137] to gather additional debugging logs...
	I1108 09:33:53.178922 1029992 cli_runner.go:164] Run: docker network inspect addons-517137
	W1108 09:33:53.195968 1029992 cli_runner.go:211] docker network inspect addons-517137 returned with exit code 1
	I1108 09:33:53.195995 1029992 network_create.go:287] error running [docker network inspect addons-517137]: docker network inspect addons-517137: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-517137 not found
	I1108 09:33:53.196009 1029992 network_create.go:289] output of [docker network inspect addons-517137]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-517137 not found
	
	** /stderr **
	I1108 09:33:53.196112 1029992 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:33:53.212737 1029992 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d38c0}
	I1108 09:33:53.212781 1029992 network_create.go:124] attempt to create docker network addons-517137 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1108 09:33:53.212850 1029992 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-517137 addons-517137
	I1108 09:33:53.273307 1029992 network_create.go:108] docker network addons-517137 192.168.49.0/24 created
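
The two Run lines above record the exact arguments minikube passed to create its bridge network (subnet 192.168.49.0/24, gateway 192.168.49.1, MTU 1500, plus the minikube labels). As a self-contained sketch only, the same invocation could be issued from Go with os/exec; the helper name createMinikubeNetwork is invented for this example and is not a minikube function, and the flag list is copied verbatim from the logged command:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createMinikubeNetwork mirrors the "docker network create" invocation logged above.
	func createMinikubeNetwork(name, subnet, gateway string, mtu int) error {
		args := []string{
			"network", "create",
			"--driver=bridge",
			"--subnet=" + subnet,
			"--gateway=" + gateway,
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=" + name,
			name,
		}
		out, err := exec.Command("docker", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker network create: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Values taken from the log lines above.
		if err := createMinikubeNetwork("addons-517137", "192.168.49.0/24", "192.168.49.1", 1500); err != nil {
			fmt.Println(err)
		}
	}
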
	I1108 09:33:53.273340 1029992 kic.go:121] calculated static IP "192.168.49.2" for the "addons-517137" container
	I1108 09:33:53.273436 1029992 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:33:53.289707 1029992 cli_runner.go:164] Run: docker volume create addons-517137 --label name.minikube.sigs.k8s.io=addons-517137 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:33:53.308928 1029992 oci.go:103] Successfully created a docker volume addons-517137
	I1108 09:33:53.309012 1029992 cli_runner.go:164] Run: docker run --rm --name addons-517137-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-517137 --entrypoint /usr/bin/test -v addons-517137:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:33:54.322133 1029992 cli_runner.go:217] Completed: docker run --rm --name addons-517137-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-517137 --entrypoint /usr/bin/test -v addons-517137:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (1.013079687s)
	I1108 09:33:54.322183 1029992 oci.go:107] Successfully prepared a docker volume addons-517137
	I1108 09:33:54.322206 1029992 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:33:54.322223 1029992 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:33:54.322288 1029992 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-517137:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:33:58.744275 1029992 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-517137:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.42194564s)
	I1108 09:33:58.744311 1029992 kic.go:203] duration metric: took 4.422082965s to extract preloaded images to volume ...
	W1108 09:33:58.744472 1029992 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 09:33:58.744587 1029992 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:33:58.797104 1029992 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-517137 --name addons-517137 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-517137 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-517137 --network addons-517137 --ip 192.168.49.2 --volume addons-517137:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:33:59.067321 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Running}}
	I1108 09:33:59.087573 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:33:59.110786 1029992 cli_runner.go:164] Run: docker exec addons-517137 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:33:59.176689 1029992 oci.go:144] the created container "addons-517137" has a running status.
	I1108 09:33:59.176715 1029992 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa...
	I1108 09:33:59.571905 1029992 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:33:59.591320 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:33:59.607237 1029992 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:33:59.607254 1029992 kic_runner.go:114] Args: [docker exec --privileged addons-517137 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:33:59.646010 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:33:59.662296 1029992 machine.go:94] provisionDockerMachine start ...
	I1108 09:33:59.662392 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:33:59.679640 1029992 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:59.679977 1029992 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1108 09:33:59.679996 1029992 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:33:59.680598 1029992 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 09:34:02.832656 1029992 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-517137
	
	I1108 09:34:02.832684 1029992 ubuntu.go:182] provisioning hostname "addons-517137"
	I1108 09:34:02.832747 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:02.852193 1029992 main.go:143] libmachine: Using SSH client type: native
	I1108 09:34:02.852641 1029992 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1108 09:34:02.852659 1029992 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-517137 && echo "addons-517137" | sudo tee /etc/hostname
	I1108 09:34:03.015851 1029992 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-517137
	
	I1108 09:34:03.015937 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:03.034720 1029992 main.go:143] libmachine: Using SSH client type: native
	I1108 09:34:03.035040 1029992 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1108 09:34:03.035063 1029992 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-517137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-517137/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-517137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:34:03.184976 1029992 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:34:03.185049 1029992 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 09:34:03.185074 1029992 ubuntu.go:190] setting up certificates
	I1108 09:34:03.185084 1029992 provision.go:84] configureAuth start
	I1108 09:34:03.185146 1029992 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-517137
	I1108 09:34:03.202935 1029992 provision.go:143] copyHostCerts
	I1108 09:34:03.203035 1029992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 09:34:03.203173 1029992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 09:34:03.203264 1029992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 09:34:03.203343 1029992 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.addons-517137 san=[127.0.0.1 192.168.49.2 addons-517137 localhost minikube]
	I1108 09:34:03.750105 1029992 provision.go:177] copyRemoteCerts
	I1108 09:34:03.750181 1029992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:34:03.750223 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:03.768815 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:03.876611 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:34:03.894410 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 09:34:03.911483 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 09:34:03.929442 1029992 provision.go:87] duration metric: took 744.34399ms to configureAuth
	I1108 09:34:03.929519 1029992 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:34:03.929736 1029992 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:34:03.929846 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:03.946608 1029992 main.go:143] libmachine: Using SSH client type: native
	I1108 09:34:03.946917 1029992 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1108 09:34:03.946937 1029992 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:34:04.204712 1029992 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:34:04.204737 1029992 machine.go:97] duration metric: took 4.542415609s to provisionDockerMachine
	I1108 09:34:04.204749 1029992 client.go:176] duration metric: took 12.231341523s to LocalClient.Create
	I1108 09:34:04.204762 1029992 start.go:167] duration metric: took 12.231407121s to libmachine.API.Create "addons-517137"
	I1108 09:34:04.204769 1029992 start.go:293] postStartSetup for "addons-517137" (driver="docker")
	I1108 09:34:04.204779 1029992 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:34:04.204847 1029992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:34:04.204891 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:04.223539 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:04.332983 1029992 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:34:04.336380 1029992 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:34:04.336406 1029992 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:34:04.336418 1029992 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 09:34:04.336511 1029992 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 09:34:04.336540 1029992 start.go:296] duration metric: took 131.765751ms for postStartSetup
	I1108 09:34:04.336858 1029992 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-517137
	I1108 09:34:04.353551 1029992 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/config.json ...
	I1108 09:34:04.353848 1029992 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:34:04.353898 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:04.370483 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:04.473361 1029992 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:34:04.478008 1029992 start.go:128] duration metric: took 12.508404461s to createHost
	I1108 09:34:04.478031 1029992 start.go:83] releasing machines lock for "addons-517137", held for 12.508523826s
	I1108 09:34:04.478100 1029992 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-517137
	I1108 09:34:04.494705 1029992 ssh_runner.go:195] Run: cat /version.json
	I1108 09:34:04.494731 1029992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:34:04.494760 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:04.494789 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:04.512700 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:04.514365 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:04.703191 1029992 ssh_runner.go:195] Run: systemctl --version
	I1108 09:34:04.709235 1029992 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:34:04.746835 1029992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:34:04.751093 1029992 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:34:04.751163 1029992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:34:04.780742 1029992 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 09:34:04.780768 1029992 start.go:496] detecting cgroup driver to use...
	I1108 09:34:04.780801 1029992 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 09:34:04.780853 1029992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:34:04.797719 1029992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:34:04.810268 1029992 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:34:04.810329 1029992 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:34:04.828064 1029992 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:34:04.845877 1029992 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:34:04.964854 1029992 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:34:05.106254 1029992 docker.go:234] disabling docker service ...
	I1108 09:34:05.106323 1029992 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:34:05.129065 1029992 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:34:05.143126 1029992 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:34:05.265935 1029992 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:34:05.384903 1029992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:34:05.398509 1029992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:34:05.413579 1029992 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:34:05.413657 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.422733 1029992 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 09:34:05.422812 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.431789 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.440353 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.449378 1029992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:34:05.457657 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.466504 1029992 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.479757 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
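
Taken together, the sed and grep edits above set the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf. An approximate reconstruction of the affected keys, not captured from the node (the [crio.image]/[crio.runtime] section placement is assumed from CRI-O's standard config layout), would be:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
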
	I1108 09:34:05.488344 1029992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:34:05.495870 1029992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:34:05.503208 1029992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:34:05.619609 1029992 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:34:05.742575 1029992 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:34:05.742719 1029992 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:34:05.746540 1029992 start.go:564] Will wait 60s for crictl version
	I1108 09:34:05.746654 1029992 ssh_runner.go:195] Run: which crictl
	I1108 09:34:05.750356 1029992 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:34:05.783474 1029992 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:34:05.783610 1029992 ssh_runner.go:195] Run: crio --version
	I1108 09:34:05.814443 1029992 ssh_runner.go:195] Run: crio --version
	I1108 09:34:05.846790 1029992 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:34:05.849832 1029992 cli_runner.go:164] Run: docker network inspect addons-517137 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:34:05.866436 1029992 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1108 09:34:05.870250 1029992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:34:05.880210 1029992 kubeadm.go:884] updating cluster {Name:addons-517137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-517137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:34:05.880324 1029992 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:34:05.880390 1029992 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:34:05.919054 1029992 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:34:05.919084 1029992 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:34:05.919155 1029992 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:34:05.943619 1029992 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:34:05.943646 1029992 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:34:05.943655 1029992 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1108 09:34:05.943756 1029992 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-517137 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-517137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:34:05.943846 1029992 ssh_runner.go:195] Run: crio config
	I1108 09:34:06.008489 1029992 cni.go:84] Creating CNI manager for ""
	I1108 09:34:06.008518 1029992 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:34:06.008551 1029992 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:34:06.008582 1029992 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-517137 NodeName:addons-517137 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:34:06.008737 1029992 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-517137"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:34:06.008823 1029992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:34:06.018358 1029992 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:34:06.018490 1029992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:34:06.027005 1029992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1108 09:34:06.041206 1029992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:34:06.056264 1029992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1108 09:34:06.069881 1029992 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:34:06.073623 1029992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:34:06.083822 1029992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:34:06.200023 1029992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:34:06.223537 1029992 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137 for IP: 192.168.49.2
	I1108 09:34:06.223556 1029992 certs.go:195] generating shared ca certs ...
	I1108 09:34:06.223571 1029992 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:06.223775 1029992 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 09:34:07.150941 1029992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt ...
	I1108 09:34:07.150971 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt: {Name:mkea0a47b63d07c9c4a4b5d0cf2668280a966698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:07.151174 1029992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key ...
	I1108 09:34:07.151194 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key: {Name:mk0561239475f2ae8f7a9724b7319a0d1d2c4d72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:07.151288 1029992 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 09:34:08.495820 1029992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt ...
	I1108 09:34:08.495851 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt: {Name:mk27f92ce91dda6a8215eb48ff9f10d8956c1f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:08.496043 1029992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key ...
	I1108 09:34:08.496060 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key: {Name:mk4e740b08be2b3d57948460919940456d8b5a0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:08.496155 1029992 certs.go:257] generating profile certs ...
	I1108 09:34:08.496223 1029992 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.key
	I1108 09:34:08.496241 1029992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt with IP's: []
	I1108 09:34:08.577578 1029992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt ...
	I1108 09:34:08.577605 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: {Name:mkc6cd9af3a8375ea817435a28926e86a1c5755f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:08.577773 1029992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.key ...
	I1108 09:34:08.577785 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.key: {Name:mka3f298034f0eb8f75532892e7a985f90e8783c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:08.577872 1029992 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key.f79cf1b5
	I1108 09:34:08.577891 1029992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt.f79cf1b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1108 09:34:09.096832 1029992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt.f79cf1b5 ...
	I1108 09:34:09.096867 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt.f79cf1b5: {Name:mk582960f4b151b73b42101173cb1c0c6f453aef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:09.097069 1029992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key.f79cf1b5 ...
	I1108 09:34:09.097083 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key.f79cf1b5: {Name:mka40fcb07ec47c39853afbe93849b08252ab5b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:09.097167 1029992 certs.go:382] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt.f79cf1b5 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt
	I1108 09:34:09.097254 1029992 certs.go:386] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key.f79cf1b5 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key
	I1108 09:34:09.097313 1029992 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.key
	I1108 09:34:09.097336 1029992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.crt with IP's: []
	I1108 09:34:09.824141 1029992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.crt ...
	I1108 09:34:09.824173 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.crt: {Name:mk77d47d458cc2c30cb8ef24936b30c34e8d441e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:09.824356 1029992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.key ...
	I1108 09:34:09.824370 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.key: {Name:mk181d8aa5d1570bf50dcdb8669d2a966f8263db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:09.824582 1029992 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:34:09.824626 1029992 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 09:34:09.824651 1029992 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:34:09.824681 1029992 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 09:34:09.825370 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:34:09.842500 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:34:09.860460 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:34:09.877073 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 09:34:09.893039 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:34:09.910084 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 09:34:09.927099 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:34:09.943691 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:34:09.961049 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:34:09.977966 1029992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:34:09.990899 1029992 ssh_runner.go:195] Run: openssl version
	I1108 09:34:09.997012 1029992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:34:10.005264 1029992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:34:10.010645 1029992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:34:10.010827 1029992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:34:10.055628 1029992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
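The two openssl/ln steps above publish the minikube CA under its OpenSSL subject hash so trust lookups against /etc/ssl/certs can find it. A minimal, purely illustrative Go sketch of the same idempotent symlink check follows; minikube performs this step with the shell commands logged above, and the hash value b5213941 is simply the value the preceding `openssl x509 -hash -noout` run printed:

```go
package main

import (
	"fmt"
	"os"
)

// ensureHashLink mirrors the logged shell step: if the hash-named link is
// not already a symlink, (re)point it at the minikube CA PEM. This is a
// hypothetical helper for illustration, not minikube's own code.
func ensureHashLink(link, target string) error {
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return nil // already a symlink, nothing to do
	}
	_ = os.Remove(link) // ignore "does not exist"
	return os.Symlink(target, link)
}

func main() {
	err := ensureHashLink("/etc/ssl/certs/b5213941.0", "/etc/ssl/certs/minikubeCA.pem")
	fmt.Println("ensureHashLink:", err)
}
```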
	I1108 09:34:10.064094 1029992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:34:10.067617 1029992 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:34:10.067675 1029992 kubeadm.go:401] StartCluster: {Name:addons-517137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-517137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
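The StartCluster entry above is the `%+v` rendering of minikube's cluster-config struct. A trimmed, hypothetical stand-in below shows how a handful of the fields visible in that line end up in this single-line format; the real config type carries many more fields than sketched here:

```go
package main

import "fmt"

// clusterConfig is a deliberately reduced stand-in; the field names and
// values are copied from the StartCluster log entry above.
type clusterConfig struct {
	Name              string
	Driver            string
	ContainerRuntime  string
	KubernetesVersion string
	APIServerPort     int
}

func main() {
	cfg := clusterConfig{
		Name:              "addons-517137",
		Driver:            "docker",
		ContainerRuntime:  "crio",
		KubernetesVersion: "v1.34.1",
		APIServerPort:     8443,
	}
	// %+v prints "{Name:addons-517137 Driver:docker ...}", the same
	// brace-delimited key:value layout seen in the log line.
	fmt.Printf("StartCluster: %+v\n", cfg)
}
```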
	I1108 09:34:10.067761 1029992 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:34:10.067824 1029992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:34:10.098971 1029992 cri.go:89] found id: ""
	I1108 09:34:10.099064 1029992 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:34:10.110532 1029992 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:34:10.119540 1029992 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:34:10.119645 1029992 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:34:10.129253 1029992 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:34:10.129315 1029992 kubeadm.go:158] found existing configuration files:
	
	I1108 09:34:10.129401 1029992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:34:10.138188 1029992 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:34:10.138309 1029992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:34:10.146525 1029992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:34:10.155493 1029992 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:34:10.155686 1029992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:34:10.163208 1029992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:34:10.171289 1029992 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:34:10.171384 1029992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:34:10.178683 1029992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:34:10.186297 1029992 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:34:10.186397 1029992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:34:10.193865 1029992 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:34:10.236336 1029992 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:34:10.236852 1029992 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:34:10.258529 1029992 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:34:10.258656 1029992 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 09:34:10.258760 1029992 kubeadm.go:319] OS: Linux
	I1108 09:34:10.258872 1029992 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:34:10.258959 1029992 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 09:34:10.259046 1029992 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:34:10.259140 1029992 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:34:10.259284 1029992 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:34:10.259370 1029992 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:34:10.259452 1029992 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:34:10.259539 1029992 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:34:10.259620 1029992 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 09:34:10.321228 1029992 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:34:10.321420 1029992 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:34:10.321563 1029992 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:34:10.333485 1029992 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:34:10.339867 1029992 out.go:252]   - Generating certificates and keys ...
	I1108 09:34:10.339992 1029992 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:34:10.340074 1029992 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:34:10.989614 1029992 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:34:11.581625 1029992 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:34:11.674110 1029992 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:34:12.677368 1029992 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:34:13.348916 1029992 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:34:13.349261 1029992 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-517137 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:34:13.494759 1029992 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:34:13.495123 1029992 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-517137 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:34:13.601783 1029992 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:34:13.960205 1029992 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:34:14.596820 1029992 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:34:14.596912 1029992 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:34:15.572739 1029992 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:34:15.979389 1029992 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:34:16.257640 1029992 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:34:16.885254 1029992 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:34:17.018192 1029992 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:34:17.018935 1029992 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:34:17.021832 1029992 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:34:17.025443 1029992 out.go:252]   - Booting up control plane ...
	I1108 09:34:17.025555 1029992 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:34:17.025645 1029992 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:34:17.026433 1029992 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:34:17.041466 1029992 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:34:17.041619 1029992 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:34:17.051630 1029992 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:34:17.051755 1029992 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:34:17.051815 1029992 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:34:17.180955 1029992 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:34:17.181098 1029992 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:34:18.681509 1029992 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500839051s
	I1108 09:34:18.685085 1029992 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:34:18.685188 1029992 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1108 09:34:18.685307 1029992 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:34:18.685398 1029992 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:34:22.202819 1029992 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.516920789s
	I1108 09:34:23.135802 1029992 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.450681413s
	I1108 09:34:24.686653 1029992 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001473454s
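The [control-plane-check] phase above polls the three endpoints it listed until each reports healthy. The following is a small illustrative Go poller against the same livez/healthz URLs, assuming the components serve self-signed bootstrap certificates; it is not kubeadm's actual implementation, only a sketch of the kind of check being performed:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 OK or the timeout expires.
// kubeadm allows up to 4m0s per component; a short timeout is used in
// main purely for demonstration.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Bootstrap components serve self-signed certs, so skip
		// verification for this probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// Endpoints copied from the [control-plane-check] lines above.
	for _, u := range []string{
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
		"https://192.168.49.2:8443/livez", // kube-apiserver
	} {
		fmt.Println(u, waitHealthy(u, 10*time.Second))
	}
}
```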
	I1108 09:34:24.709533 1029992 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:34:24.721577 1029992 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:34:24.735763 1029992 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:34:24.736019 1029992 kubeadm.go:319] [mark-control-plane] Marking the node addons-517137 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:34:24.747586 1029992 kubeadm.go:319] [bootstrap-token] Using token: ahprr5.dno4v0t3rz7ucop8
	I1108 09:34:24.752674 1029992 out.go:252]   - Configuring RBAC rules ...
	I1108 09:34:24.752831 1029992 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:34:24.757999 1029992 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:34:24.767046 1029992 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:34:24.772158 1029992 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:34:24.780208 1029992 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:34:24.784639 1029992 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:34:25.094305 1029992 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:34:25.548524 1029992 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:34:26.093606 1029992 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:34:26.094884 1029992 kubeadm.go:319] 
	I1108 09:34:26.094961 1029992 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:34:26.094985 1029992 kubeadm.go:319] 
	I1108 09:34:26.095066 1029992 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:34:26.095071 1029992 kubeadm.go:319] 
	I1108 09:34:26.095097 1029992 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:34:26.095159 1029992 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:34:26.095223 1029992 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:34:26.095230 1029992 kubeadm.go:319] 
	I1108 09:34:26.095286 1029992 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:34:26.095291 1029992 kubeadm.go:319] 
	I1108 09:34:26.095340 1029992 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:34:26.095344 1029992 kubeadm.go:319] 
	I1108 09:34:26.095398 1029992 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:34:26.095477 1029992 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:34:26.095549 1029992 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:34:26.095553 1029992 kubeadm.go:319] 
	I1108 09:34:26.095642 1029992 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:34:26.095722 1029992 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:34:26.095726 1029992 kubeadm.go:319] 
	I1108 09:34:26.095813 1029992 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ahprr5.dno4v0t3rz7ucop8 \
	I1108 09:34:26.095921 1029992 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 \
	I1108 09:34:26.095943 1029992 kubeadm.go:319] 	--control-plane 
	I1108 09:34:26.095947 1029992 kubeadm.go:319] 
	I1108 09:34:26.096036 1029992 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:34:26.096040 1029992 kubeadm.go:319] 
	I1108 09:34:26.096126 1029992 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ahprr5.dno4v0t3rz7ucop8 \
	I1108 09:34:26.096232 1029992 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 
	I1108 09:34:26.100146 1029992 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 09:34:26.100424 1029992 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 09:34:26.100602 1029992 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:34:26.100650 1029992 cni.go:84] Creating CNI manager for ""
	I1108 09:34:26.100666 1029992 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:34:26.103865 1029992 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:34:26.106792 1029992 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:34:26.110872 1029992 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:34:26.110903 1029992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:34:26.126567 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:34:26.441973 1029992 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:34:26.442067 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:26.442133 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-517137 minikube.k8s.io/updated_at=2025_11_08T09_34_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=addons-517137 minikube.k8s.io/primary=true
	I1108 09:34:26.458809 1029992 ops.go:34] apiserver oom_adj: -16
	I1108 09:34:26.576637 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:27.076753 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:27.577063 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:28.077063 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:28.576712 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:29.077619 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:29.576694 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:30.077733 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:30.577657 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:31.076742 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:31.217295 1029992 kubeadm.go:1114] duration metric: took 4.775289541s to wait for elevateKubeSystemPrivileges
	I1108 09:34:31.217323 1029992 kubeadm.go:403] duration metric: took 21.149651168s to StartCluster
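The run of `kubectl get sa default` calls just above is minikube waiting for the default service account to appear before creating the minikube-rbac cluster-admin binding (the "elevateKubeSystemPrivileges" step). A hedged client-go sketch of the same wait loop follows; the helper name and kubeconfig path are assumptions for the example, and minikube itself shells out to kubectl rather than using this code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists in the
// given namespace, mirroring the repeated `kubectl get sa default` calls
// seen in the log (roughly every 0.5s).
func waitForDefaultSA(kubeconfig, namespace string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		_, err = client.CoreV1().ServiceAccounts(namespace).Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s: %v", timeout, err)
}

func main() {
	// Path is the in-VM kubeconfig used throughout the log; adjust as needed.
	fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", "default", 2*time.Minute))
}
```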
	I1108 09:34:31.217340 1029992 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:31.217455 1029992 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 09:34:31.217845 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:31.218035 1029992 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:34:31.218255 1029992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:34:31.218553 1029992 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1108 09:34:31.218658 1029992 addons.go:70] Setting yakd=true in profile "addons-517137"
	I1108 09:34:31.218672 1029992 addons.go:239] Setting addon yakd=true in "addons-517137"
	I1108 09:34:31.218694 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.219195 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.219719 1029992 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:34:31.219873 1029992 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-517137"
	I1108 09:34:31.219901 1029992 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-517137"
	I1108 09:34:31.219929 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.220076 1029992 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-517137"
	I1108 09:34:31.220134 1029992 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-517137"
	I1108 09:34:31.220171 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.220362 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.220707 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.225134 1029992 addons.go:70] Setting registry=true in profile "addons-517137"
	I1108 09:34:31.225167 1029992 addons.go:239] Setting addon registry=true in "addons-517137"
	I1108 09:34:31.225202 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.225328 1029992 addons.go:70] Setting cloud-spanner=true in profile "addons-517137"
	I1108 09:34:31.225348 1029992 addons.go:239] Setting addon cloud-spanner=true in "addons-517137"
	I1108 09:34:31.225367 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.225795 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.226238 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.227429 1029992 out.go:179] * Verifying Kubernetes components...
	I1108 09:34:31.227711 1029992 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-517137"
	I1108 09:34:31.227780 1029992 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-517137"
	I1108 09:34:31.227816 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.228192 1029992 addons.go:70] Setting registry-creds=true in profile "addons-517137"
	I1108 09:34:31.228210 1029992 addons.go:239] Setting addon registry-creds=true in "addons-517137"
	I1108 09:34:31.228234 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.228247 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.228800 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.243915 1029992 addons.go:70] Setting storage-provisioner=true in profile "addons-517137"
	I1108 09:34:31.243973 1029992 addons.go:239] Setting addon storage-provisioner=true in "addons-517137"
	I1108 09:34:31.244075 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.244681 1029992 addons.go:70] Setting default-storageclass=true in profile "addons-517137"
	I1108 09:34:31.244705 1029992 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-517137"
	I1108 09:34:31.244881 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.244975 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.258696 1029992 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-517137"
	I1108 09:34:31.258726 1029992 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-517137"
	I1108 09:34:31.259063 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.276809 1029992 addons.go:70] Setting volcano=true in profile "addons-517137"
	I1108 09:34:31.276843 1029992 addons.go:239] Setting addon volcano=true in "addons-517137"
	I1108 09:34:31.276898 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.277380 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.280128 1029992 addons.go:70] Setting gcp-auth=true in profile "addons-517137"
	I1108 09:34:31.280218 1029992 mustload.go:66] Loading cluster: addons-517137
	I1108 09:34:31.287760 1029992 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:34:31.288207 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.298686 1029992 addons.go:70] Setting volumesnapshots=true in profile "addons-517137"
	I1108 09:34:31.298716 1029992 addons.go:239] Setting addon volumesnapshots=true in "addons-517137"
	I1108 09:34:31.298753 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.299253 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.308644 1029992 addons.go:70] Setting ingress=true in profile "addons-517137"
	I1108 09:34:31.308720 1029992 addons.go:239] Setting addon ingress=true in "addons-517137"
	I1108 09:34:31.308796 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.309336 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.356705 1029992 addons.go:70] Setting ingress-dns=true in profile "addons-517137"
	I1108 09:34:31.356755 1029992 addons.go:239] Setting addon ingress-dns=true in "addons-517137"
	I1108 09:34:31.356813 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.357393 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.358200 1029992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:34:31.388269 1029992 addons.go:70] Setting inspektor-gadget=true in profile "addons-517137"
	I1108 09:34:31.388300 1029992 addons.go:239] Setting addon inspektor-gadget=true in "addons-517137"
	I1108 09:34:31.388346 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.388996 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.423940 1029992 addons.go:70] Setting metrics-server=true in profile "addons-517137"
	I1108 09:34:31.423969 1029992 addons.go:239] Setting addon metrics-server=true in "addons-517137"
	I1108 09:34:31.424012 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.424494 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.490598 1029992 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1108 09:34:31.500888 1029992 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1108 09:34:31.501174 1029992 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1108 09:34:31.501997 1029992 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1108 09:34:31.522708 1029992 addons.go:239] Setting addon default-storageclass=true in "addons-517137"
	I1108 09:34:31.525827 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.526547 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.530416 1029992 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:34:31.530491 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1108 09:34:31.530592 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.523689 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.520641 1029992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:34:31.521525 1029992 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1108 09:34:31.553895 1029992 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1108 09:34:31.554046 1029992 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:34:31.554064 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1108 09:34:31.554163 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.554798 1029992 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1108 09:34:31.554817 1029992 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1108 09:34:31.554872 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.521518 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1108 09:34:31.560755 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1108 09:34:31.566828 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	W1108 09:34:31.523779 1029992 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1108 09:34:31.572654 1029992 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:34:31.573008 1029992 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1108 09:34:31.573024 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1108 09:34:31.573091 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.577782 1029992 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1108 09:34:31.578201 1029992 out.go:179]   - Using image docker.io/registry:3.0.0
	I1108 09:34:31.578541 1029992 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:34:31.578561 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:34:31.578630 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.586731 1029992 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:34:31.586753 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1108 09:34:31.586816 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.600171 1029992 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:34:31.600201 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1108 09:34:31.600259 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.604588 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1108 09:34:31.607925 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1108 09:34:31.607950 1029992 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1108 09:34:31.608020 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.618020 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1108 09:34:31.618325 1029992 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1108 09:34:31.618347 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1108 09:34:31.618410 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.634777 1029992 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:34:31.644165 1029992 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1108 09:34:31.645535 1029992 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-517137"
	I1108 09:34:31.645573 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.645992 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.694218 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.696073 1029992 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:34:31.696091 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1108 09:34:31.696155 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.700602 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.701148 1029992 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:34:31.701370 1029992 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1108 09:34:31.703270 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1108 09:34:31.703588 1029992 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:34:31.704374 1029992 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:34:31.704460 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.711729 1029992 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 09:34:31.711753 1029992 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 09:34:31.711822 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.724256 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1108 09:34:31.726054 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.727126 1029992 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1108 09:34:31.728322 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.731322 1029992 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:34:31.731347 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1108 09:34:31.731411 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.732282 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1108 09:34:31.740146 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1108 09:34:31.743610 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1108 09:34:31.743647 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1108 09:34:31.743717 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.756384 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.830517 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.832553 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.845367 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.853297 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.870519 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.899821 1029992 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1108 09:34:31.905270 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.910158 1029992 out.go:179]   - Using image docker.io/busybox:stable
	I1108 09:34:31.910478 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.913230 1029992 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:34:31.913253 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1108 09:34:31.913323 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.918235 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.921426 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.948678 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:32.016428 1029992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:34:32.399943 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:34:32.523820 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:34:32.532211 1029992 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1108 09:34:32.532237 1029992 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1108 09:34:32.610140 1029992 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 09:34:32.610163 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1108 09:34:32.628101 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:34:32.640105 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:34:32.715121 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1108 09:34:32.719684 1029992 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1108 09:34:32.719769 1029992 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1108 09:34:32.721992 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:34:32.750533 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:34:32.759624 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:34:32.767117 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:34:32.769450 1029992 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 09:34:32.769517 1029992 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 09:34:32.772859 1029992 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1108 09:34:32.772923 1029992 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1108 09:34:32.774875 1029992 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1108 09:34:32.774935 1029992 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1108 09:34:32.777350 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:34:32.789639 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1108 09:34:32.789711 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1108 09:34:32.896848 1029992 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:34:32.896872 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1108 09:34:32.898578 1029992 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1108 09:34:32.898599 1029992 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1108 09:34:32.932923 1029992 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:34:32.932946 1029992 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 09:34:32.959325 1029992 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1108 09:34:32.959403 1029992 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1108 09:34:32.963426 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1108 09:34:32.963506 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1108 09:34:33.040408 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:34:33.064014 1029992 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1108 09:34:33.064091 1029992 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1108 09:34:33.104511 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1108 09:34:33.104589 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1108 09:34:33.169107 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:34:33.171349 1029992 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:34:33.171415 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1108 09:34:33.177127 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1108 09:34:33.177199 1029992 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1108 09:34:33.244375 1029992 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.693705969s)
	I1108 09:34:33.244463 1029992 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
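	For reference, the sed pipeline above injects a hosts block into the coredns Corefile so that host.minikube.internal resolves to the gateway address 192.168.49.1 from inside the cluster. A minimal sketch of verifying the injected record by hand, assuming the same in-VM kubeconfig and kubectl binary shown in the log (the verification command itself is hypothetical, not part of the test run):

	    # dump the Corefile and show the injected hosts block
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl -n kube-system \
	      get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	    # expected excerpt, per the sed expression above:
	    #   hosts {
	    #      192.168.49.1 host.minikube.internal
	    #      fallthrough
	    #   }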
	I1108 09:34:33.244515 1029992 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.228054601s)
	I1108 09:34:33.246033 1029992 node_ready.go:35] waiting up to 6m0s for node "addons-517137" to be "Ready" ...
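	The repeated node_ready entries that follow poll the node's Ready condition until it flips to True. A hand-run equivalent of that poll, sketched under the assumption that the same in-VM kubeconfig is used (hypothetical command, not emitted by the test):

	    # check the Ready condition the test is polling for
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl get node addons-517137 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints False until the kubelet and CNI are up, then True
	    # (in this run the node turns Ready at 09:35:13, about 40s later)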
	I1108 09:34:33.255162 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1108 09:34:33.255255 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1108 09:34:33.350098 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:34:33.400259 1029992 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:34:33.400330 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1108 09:34:33.455033 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1108 09:34:33.455107 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1108 09:34:33.629042 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1108 09:34:33.629114 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1108 09:34:33.675171 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:34:33.750013 1029992 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-517137" context rescaled to 1 replicas
	I1108 09:34:33.881988 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1108 09:34:33.882062 1029992 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1108 09:34:34.104769 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1108 09:34:34.104839 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1108 09:34:34.339333 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1108 09:34:34.339412 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1108 09:34:34.510508 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 09:34:34.510533 1029992 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1108 09:34:34.741594 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1108 09:34:35.270467 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:35.961846 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.561827869s)
	I1108 09:34:35.961925 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.333751137s)
	I1108 09:34:35.961890 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.438045064s)
	I1108 09:34:36.622217 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.982039632s)
	I1108 09:34:36.622411 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.907216937s)
	I1108 09:34:37.390769 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.668702732s)
	I1108 09:34:37.390805 1029992 addons.go:480] Verifying addon ingress=true in "addons-517137"
	I1108 09:34:37.390983 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.640381507s)
	I1108 09:34:37.391012 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.631329168s)
	I1108 09:34:37.391026 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.623851507s)
	I1108 09:34:37.391117 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.613711871s)
	I1108 09:34:37.391145 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.350660533s)
	I1108 09:34:37.391156 1029992 addons.go:480] Verifying addon registry=true in "addons-517137"
	I1108 09:34:37.391232 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.041058857s)
	I1108 09:34:37.391443 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.71617951s)
	W1108 09:34:37.392089 1029992 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1108 09:34:37.392122 1029992 retry.go:31] will retry after 195.795653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
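	The failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not established those CRDs yet, so the kind cannot be resolved. minikube simply retries (and then retries with --force at 09:34:37.588 below). A manual workaround would be to apply and wait for the CRDs before the custom resources; a minimal sketch using the same manifests (the explicit wait step is hypothetical, not what the test does):

	    # 1. apply the snapshot CRDs on their own
	    kubectl apply \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    # 2. wait until the CRDs are established
	    kubectl wait --for condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    # 3. now the VolumeSnapshotClass and the controller can be applied
	    kubectl apply \
	      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml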
	I1108 09:34:37.391459 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.222283377s)
	I1108 09:34:37.392148 1029992 addons.go:480] Verifying addon metrics-server=true in "addons-517137"
	I1108 09:34:37.394182 1029992 out.go:179] * Verifying ingress addon...
	I1108 09:34:37.396252 1029992 out.go:179] * Verifying registry addon...
	I1108 09:34:37.398169 1029992 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-517137 service yakd-dashboard -n yakd-dashboard
	
	I1108 09:34:37.399044 1029992 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1108 09:34:37.400816 1029992 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1108 09:34:37.405876 1029992 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:34:37.405899 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:37.407727 1029992 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1108 09:34:37.407748 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:37.588217 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:34:37.668545 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.926910948s)
	I1108 09:34:37.668579 1029992 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-517137"
	I1108 09:34:37.671515 1029992 out.go:179] * Verifying csi-hostpath-driver addon...
	I1108 09:34:37.675092 1029992 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1108 09:34:37.689238 1029992 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:34:37.689270 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:37.749912 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:37.905505 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:37.905902 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:38.179980 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:38.405875 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:38.406390 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:38.678491 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:38.904350 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:38.904483 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:39.180935 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:39.189135 1029992 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1108 09:34:39.189220 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:39.205929 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:39.317994 1029992 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1108 09:34:39.331942 1029992 addons.go:239] Setting addon gcp-auth=true in "addons-517137"
	I1108 09:34:39.331999 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:39.332469 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:39.349639 1029992 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1108 09:34:39.349701 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:39.367704 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:39.411830 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:39.412177 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:39.470849 1029992 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1108 09:34:39.473370 1029992 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:34:39.475851 1029992 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1108 09:34:39.475873 1029992 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1108 09:34:39.488881 1029992 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1108 09:34:39.488903 1029992 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1108 09:34:39.501883 1029992 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:34:39.501911 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1108 09:34:39.514803 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:34:39.679422 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:39.750059 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:39.904952 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:39.906127 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:40.022187 1029992 addons.go:480] Verifying addon gcp-auth=true in "addons-517137"
	I1108 09:34:40.025621 1029992 out.go:179] * Verifying gcp-auth addon...
	I1108 09:34:40.030702 1029992 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1108 09:34:40.040969 1029992 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1108 09:34:40.041036 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:40.179454 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:40.408866 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:40.409921 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:40.534195 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:40.678000 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:40.902834 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:40.903381 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:41.034539 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:41.178364 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:41.402364 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:41.404743 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:41.534354 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:41.678350 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:41.902032 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:41.903616 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:42.042236 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:42.182465 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:42.250315 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:42.405300 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:42.406107 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:42.533772 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:42.678685 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:42.903398 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:42.904597 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:43.033930 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:43.177982 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:43.405228 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:43.405933 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:43.534273 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:43.678302 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:43.902760 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:43.905755 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:44.035066 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:44.178055 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:44.408074 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:44.408751 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:44.533927 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:44.678700 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:44.749465 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:44.903066 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:44.903724 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:45.036810 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:45.180805 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:45.407499 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:45.408321 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:45.534706 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:45.678761 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:45.902361 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:45.904563 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:46.035080 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:46.178126 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:46.403863 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:46.404022 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:46.533804 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:46.678447 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:46.902888 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:46.903426 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:47.034588 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:47.178332 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:47.249446 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:47.404617 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:47.404727 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:47.533882 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:47.678537 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:47.902058 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:47.903890 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:48.034433 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:48.178144 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:48.408168 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:48.409487 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:48.534246 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:48.678437 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:48.903292 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:48.904635 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:49.034393 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:49.178416 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:49.249635 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:49.406254 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:49.406384 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:49.534830 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:49.678572 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:49.902421 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:49.904123 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:50.034547 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:50.178726 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:50.403314 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:50.404375 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:50.534321 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:50.677994 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:50.903385 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:50.903830 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:51.033994 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:51.178999 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:51.249925 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:51.406331 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:51.406470 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:51.533802 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:51.678762 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:51.903684 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:51.904059 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:52.034603 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:52.178815 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:52.408274 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:52.408857 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:52.533736 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:52.678777 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:52.903071 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:52.903886 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:53.034115 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:53.177866 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:53.409367 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:53.410220 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:53.549364 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:53.678639 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:53.749740 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:53.903396 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:53.903525 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:54.034718 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:54.179221 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:54.402300 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:54.404273 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:54.534283 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:54.678011 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:54.903447 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:54.903720 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:55.033853 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:55.178716 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:55.409199 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:55.409502 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:55.535088 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:55.677859 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:55.903020 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:55.904291 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:56.034324 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:56.178583 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:56.249126 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:56.405392 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:56.405941 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:56.535180 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:56.678103 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:56.903692 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:56.903920 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:57.034037 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:57.179155 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:57.403475 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:57.404746 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:57.534271 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:57.677985 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:57.902921 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:57.903883 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:58.038183 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:58.179088 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:58.249834 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:58.408986 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:58.409237 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:58.534008 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:58.678741 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:58.903493 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:58.903943 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:59.034057 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:59.178790 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:59.403246 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:59.404339 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:59.534972 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:59.678830 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:59.902917 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:59.903862 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:00.040819 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:00.199027 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:00.409333 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:00.411642 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:00.535077 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:00.679113 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:00.749366 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:00.902719 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:00.903387 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:01.035001 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:01.179615 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:01.404290 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:01.405037 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:01.535060 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:01.680222 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:01.902319 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:01.904386 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:02.034614 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:02.178558 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:02.403789 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:02.405959 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:02.535972 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:02.678630 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:02.902080 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:02.905162 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:03.034389 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:03.178398 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:03.249316 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:03.408504 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:03.409373 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:03.534506 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:03.678668 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:03.902763 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:03.903235 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:04.034402 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:04.178311 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:04.404106 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:04.404323 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:04.534935 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:04.678835 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:04.903914 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:04.904031 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:05.034250 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:05.178074 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:05.249635 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:05.405670 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:05.406236 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:05.534386 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:05.678602 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:05.903099 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:05.903625 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:06.034101 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:06.178888 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:06.403712 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:06.404958 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:06.534519 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:06.678137 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:06.902951 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:06.903309 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:07.034395 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:07.178535 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:07.402159 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:07.409681 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:07.534667 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:07.678451 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:07.749453 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:07.902869 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:07.904055 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:08.034507 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:08.178585 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:08.405199 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:08.405642 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:08.534609 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:08.678393 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:08.902485 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:08.904633 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:09.034558 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:09.178517 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:09.408370 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:09.408394 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:09.534060 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:09.678039 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:09.749874 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:09.903253 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:09.903722 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:10.034612 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:10.178904 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:10.404780 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:10.405483 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:10.534505 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:10.678366 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:10.903316 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:10.904194 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:11.034164 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:11.178739 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:11.402269 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:11.408268 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:11.534260 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:11.678503 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:11.902426 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:11.903865 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:12.034613 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:12.178583 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:12.249439 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:12.404570 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:12.405168 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:12.534216 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:12.678068 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:12.931236 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:12.936645 1029992 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:35:12.936667 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:13.034620 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:13.207796 1029992 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:35:13.207880 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:13.289360 1029992 node_ready.go:49] node "addons-517137" is "Ready"
	I1108 09:35:13.289439 1029992 node_ready.go:38] duration metric: took 40.043247569s for node "addons-517137" to be "Ready" ...
	I1108 09:35:13.289467 1029992 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:35:13.289546 1029992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:35:13.331304 1029992 api_server.go:72] duration metric: took 42.113241495s to wait for apiserver process to appear ...
	I1108 09:35:13.331329 1029992 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:35:13.331349 1029992 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1108 09:35:13.361555 1029992 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1108 09:35:13.365617 1029992 api_server.go:141] control plane version: v1.34.1
	I1108 09:35:13.365732 1029992 api_server.go:131] duration metric: took 34.395139ms to wait for apiserver health ...
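	The healthz check logged above is a plain HTTPS GET against the apiserver, repeated until it answers 200. A minimal Go sketch of that kind of poll follows; the endpoint URL is the one seen in the log, while the insecure TLS setting, the retry interval, and the timeout are assumptions for a local test cluster, not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it answers
	// 200 OK or the deadline expires. TLS verification is skipped because a
	// local test cluster typically serves a self-signed certificate (assumption).
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log above; the timeout is an illustrative value.
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}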
	I1108 09:35:13.365763 1029992 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:35:13.389518 1029992 system_pods.go:59] 19 kube-system pods found
	I1108 09:35:13.389554 1029992 system_pods.go:61] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Pending
	I1108 09:35:13.389564 1029992 system_pods.go:61] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:13.389570 1029992 system_pods.go:61] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending
	I1108 09:35:13.389604 1029992 system_pods.go:61] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending
	I1108 09:35:13.389618 1029992 system_pods.go:61] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:13.389623 1029992 system_pods.go:61] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:13.389627 1029992 system_pods.go:61] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:13.389632 1029992 system_pods.go:61] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:13.389645 1029992 system_pods.go:61] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:13.389650 1029992 system_pods.go:61] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:13.389655 1029992 system_pods.go:61] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:13.389688 1029992 system_pods.go:61] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:13.389701 1029992 system_pods.go:61] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending
	I1108 09:35:13.389711 1029992 system_pods.go:61] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:13.389721 1029992 system_pods.go:61] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending
	I1108 09:35:13.389726 1029992 system_pods.go:61] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending
	I1108 09:35:13.389731 1029992 system_pods.go:61] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending
	I1108 09:35:13.389738 1029992 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:13.389774 1029992 system_pods.go:61] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:35:13.389790 1029992 system_pods.go:74] duration metric: took 24.016879ms to wait for pod list to return data ...
	I1108 09:35:13.389805 1029992 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:35:13.396894 1029992 default_sa.go:45] found service account: "default"
	I1108 09:35:13.396931 1029992 default_sa.go:55] duration metric: took 7.110797ms for default service account to be created ...
	I1108 09:35:13.396942 1029992 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:35:13.412288 1029992 system_pods.go:86] 19 kube-system pods found
	I1108 09:35:13.412322 1029992 system_pods.go:89] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Pending
	I1108 09:35:13.412332 1029992 system_pods.go:89] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:13.412364 1029992 system_pods.go:89] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending
	I1108 09:35:13.412379 1029992 system_pods.go:89] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending
	I1108 09:35:13.412384 1029992 system_pods.go:89] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:13.412388 1029992 system_pods.go:89] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:13.412393 1029992 system_pods.go:89] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:13.412398 1029992 system_pods.go:89] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:13.412411 1029992 system_pods.go:89] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:13.412447 1029992 system_pods.go:89] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:13.412454 1029992 system_pods.go:89] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:13.412461 1029992 system_pods.go:89] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:13.412465 1029992 system_pods.go:89] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending
	I1108 09:35:13.412471 1029992 system_pods.go:89] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:13.412475 1029992 system_pods.go:89] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending
	I1108 09:35:13.412480 1029992 system_pods.go:89] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending
	I1108 09:35:13.412506 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending
	I1108 09:35:13.412522 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:13.412530 1029992 system_pods.go:89] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:35:13.412545 1029992 retry.go:31] will retry after 302.131416ms: missing components: kube-dns
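	The "will retry after ... missing components: kube-dns" entries above come from a loop that re-lists kube-system pods after a short delay until the required components report Running. A minimal client-go sketch of such a wait is below; the kubeconfig path, the fixed delay, and the k8s-app=kube-dns selector are assumptions for illustration, and minikube's own loop (with randomized backoff) differs.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunningPods re-lists pods matching the selector until every one of
	// them reports phase Running, or the deadline expires.
	func waitForRunningPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				running := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						running = false
						break
					}
				}
				if running {
					return nil
				}
			}
			time.Sleep(300 * time.Millisecond) // fixed delay here; the tool uses a randomized backoff
		}
		return fmt.Errorf("pods %q in %q not Running after %s", selector, ns, timeout)
	}

	func main() {
		// Hypothetical kubeconfig path; adjust for the environment at hand.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitForRunningPods(cs, "kube-system", "k8s-app=kube-dns", 2*time.Minute))
	}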
	I1108 09:35:13.453968 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:13.454393 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:13.542205 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:13.686915 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:13.733391 1029992 system_pods.go:86] 19 kube-system pods found
	I1108 09:35:13.733439 1029992 system_pods.go:89] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:35:13.733448 1029992 system_pods.go:89] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:13.733486 1029992 system_pods.go:89] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:35:13.733501 1029992 system_pods.go:89] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:35:13.733506 1029992 system_pods.go:89] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:13.733512 1029992 system_pods.go:89] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:13.733524 1029992 system_pods.go:89] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:13.733529 1029992 system_pods.go:89] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:13.733552 1029992 system_pods.go:89] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:13.733566 1029992 system_pods.go:89] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:13.733572 1029992 system_pods.go:89] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:13.733591 1029992 system_pods.go:89] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:13.733602 1029992 system_pods.go:89] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending
	I1108 09:35:13.733611 1029992 system_pods.go:89] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:13.733635 1029992 system_pods.go:89] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:35:13.733649 1029992 system_pods.go:89] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:35:13.733657 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:13.733663 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:13.733675 1029992 system_pods.go:89] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:35:13.733693 1029992 retry.go:31] will retry after 356.856722ms: missing components: kube-dns
	I1108 09:35:13.919462 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:13.919589 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:14.036971 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:14.097040 1029992 system_pods.go:86] 19 kube-system pods found
	I1108 09:35:14.097142 1029992 system_pods.go:89] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:35:14.097169 1029992 system_pods.go:89] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:14.097201 1029992 system_pods.go:89] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:35:14.097229 1029992 system_pods.go:89] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:35:14.097257 1029992 system_pods.go:89] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:14.097296 1029992 system_pods.go:89] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:14.097320 1029992 system_pods.go:89] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:14.097345 1029992 system_pods.go:89] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:14.097380 1029992 system_pods.go:89] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:14.097404 1029992 system_pods.go:89] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:14.097430 1029992 system_pods.go:89] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:14.097466 1029992 system_pods.go:89] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:14.097494 1029992 system_pods.go:89] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:35:14.097528 1029992 system_pods.go:89] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:14.097561 1029992 system_pods.go:89] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:35:14.097593 1029992 system_pods.go:89] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:35:14.097633 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:14.097661 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:14.097687 1029992 system_pods.go:89] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Running
	I1108 09:35:14.097733 1029992 retry.go:31] will retry after 316.225073ms: missing components: kube-dns
	I1108 09:35:14.178799 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:14.404966 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:14.405378 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:14.422118 1029992 system_pods.go:86] 19 kube-system pods found
	I1108 09:35:14.422206 1029992 system_pods.go:89] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:35:14.422239 1029992 system_pods.go:89] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:14.422261 1029992 system_pods.go:89] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:35:14.422282 1029992 system_pods.go:89] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:35:14.422306 1029992 system_pods.go:89] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:14.422337 1029992 system_pods.go:89] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:14.422360 1029992 system_pods.go:89] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:14.422384 1029992 system_pods.go:89] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:14.422420 1029992 system_pods.go:89] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:14.422445 1029992 system_pods.go:89] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:14.422469 1029992 system_pods.go:89] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:14.422503 1029992 system_pods.go:89] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:14.422532 1029992 system_pods.go:89] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:35:14.422561 1029992 system_pods.go:89] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:14.422595 1029992 system_pods.go:89] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:35:14.422621 1029992 system_pods.go:89] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:35:14.422648 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:14.422688 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:14.422708 1029992 system_pods.go:89] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Running
	I1108 09:35:14.422738 1029992 retry.go:31] will retry after 596.782291ms: missing components: kube-dns
	I1108 09:35:14.534731 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:14.679612 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:14.917972 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:14.918366 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:15.049178 1029992 system_pods.go:86] 19 kube-system pods found
	I1108 09:35:15.049218 1029992 system_pods.go:89] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Running
	I1108 09:35:15.049232 1029992 system_pods.go:89] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:15.049264 1029992 system_pods.go:89] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:35:15.049282 1029992 system_pods.go:89] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:35:15.049288 1029992 system_pods.go:89] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:15.049294 1029992 system_pods.go:89] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:15.049305 1029992 system_pods.go:89] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:15.049310 1029992 system_pods.go:89] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:15.049317 1029992 system_pods.go:89] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:15.049326 1029992 system_pods.go:89] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:15.049359 1029992 system_pods.go:89] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:15.049374 1029992 system_pods.go:89] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:15.049382 1029992 system_pods.go:89] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:35:15.049392 1029992 system_pods.go:89] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:15.049404 1029992 system_pods.go:89] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:35:15.049415 1029992 system_pods.go:89] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:35:15.049438 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:15.049453 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:15.049458 1029992 system_pods.go:89] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Running
	I1108 09:35:15.049482 1029992 system_pods.go:126] duration metric: took 1.652516878s to wait for k8s-apps to be running ...
	I1108 09:35:15.049496 1029992 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:35:15.049569 1029992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:35:15.051182 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:15.071395 1029992 system_svc.go:56] duration metric: took 21.88979ms WaitForService to wait for kubelet
	I1108 09:35:15.071425 1029992 kubeadm.go:587] duration metric: took 43.853367658s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:35:15.071443 1029992 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:35:15.083358 1029992 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 09:35:15.083394 1029992 node_conditions.go:123] node cpu capacity is 2
	I1108 09:35:15.083412 1029992 node_conditions.go:105] duration metric: took 11.960535ms to run NodePressure ...
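	The NodePressure step above reads the node's capacity (ephemeral storage, CPU) and its conditions. A minimal client-go sketch of the same inspection, assuming a hypothetical kubeconfig path and an illustrative output format:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; adjust for the environment at hand.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity figures corresponding to the ephemeral-storage and CPU
			// values reported in the log above.
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
			// Any pressure condition (MemoryPressure, DiskPressure, PIDPressure)
			// reporting True would indicate the node is under resource pressure.
			for _, c := range n.Status.Conditions {
				if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("  condition %s is True: %s\n", c.Type, c.Message)
				}
			}
		}
	}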
	I1108 09:35:15.083451 1029992 start.go:242] waiting for startup goroutines ...
	I1108 09:35:15.179654 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:15.404809 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:15.405255 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:15.534432 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:15.679490 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:15.903339 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:15.904904 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:16.037316 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:16.179436 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:16.411488 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:16.411922 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:16.534337 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:16.685810 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:16.908611 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:16.909028 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:17.037398 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:17.179819 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:17.413020 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:17.413534 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:17.534930 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:17.678611 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:17.904965 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:17.905331 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:18.036141 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:18.178920 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:18.404657 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:18.409391 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:18.534826 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:18.679966 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:18.903907 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:18.904681 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:19.033898 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:19.178952 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:19.407790 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:19.410446 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:19.534740 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:19.679535 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:19.905672 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:19.906016 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:20.034120 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:20.179429 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:20.410320 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:20.410711 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:20.533868 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:20.679733 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:20.905362 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:20.905832 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:21.034058 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:21.178701 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:21.407759 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:21.408089 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:21.534506 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:21.679376 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:21.904938 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:21.905332 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:22.034994 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:22.178341 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:22.405050 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:22.405292 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:22.534961 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:22.678643 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:22.905112 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:22.905508 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:23.034049 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:23.178562 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:23.404417 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:23.404900 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:23.533990 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:23.678387 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:23.903231 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:23.903937 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:24.033784 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:24.181324 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:24.414312 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:24.419838 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:24.537005 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:24.680387 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:24.913228 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:24.913681 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:25.039697 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:25.182200 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:25.421053 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:25.421263 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:25.534818 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:25.679730 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:25.908082 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:25.908302 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:26.036690 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:26.181658 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:26.408512 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:26.408799 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:26.535935 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:26.678080 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:26.903703 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:26.905391 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:27.034866 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:27.180171 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:27.408024 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:27.408209 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:27.534294 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:27.679949 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:27.903054 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:27.904384 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:28.035672 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:28.185738 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:28.407179 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:28.407731 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:28.538181 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:28.680984 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:28.907183 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:28.907593 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:29.035578 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:29.186053 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:29.410668 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:29.411055 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:29.537137 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:29.679237 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:29.906024 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:29.906367 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:30.039083 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:30.179222 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:30.405922 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:30.408537 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:30.544846 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:30.699284 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:30.902645 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:30.904948 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:31.033842 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:31.182470 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:31.406487 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:31.406681 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:31.533772 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:31.679127 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:31.902602 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:31.905607 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:32.035592 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:32.178893 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:32.413801 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:32.417066 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:32.533988 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:32.678768 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:32.904688 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:32.906056 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:33.034861 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:33.180165 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:33.409210 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:33.411153 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:33.534487 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:33.680160 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:33.906411 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:33.906812 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:34.034909 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:34.180370 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:34.404300 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:34.404583 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:34.533943 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:34.679396 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:34.903486 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:34.903711 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:35.034760 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:35.178905 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:35.401959 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:35.404158 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:35.533956 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:35.678604 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:35.903074 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:35.904145 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:36.034886 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:36.179446 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:36.403268 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:36.413194 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:36.533766 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:36.679082 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:36.905362 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:36.905786 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:37.039240 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:37.179599 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:37.413727 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:37.413982 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:37.534116 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:37.678179 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:37.902624 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:37.904284 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:38.035007 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:38.179631 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:38.408821 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:38.409018 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:38.534673 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:38.679261 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:38.902948 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:38.905764 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:39.034534 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:39.179539 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:39.406411 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:39.407834 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:39.533670 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:39.679409 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:39.903924 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:39.905325 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:40.035173 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:40.178628 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:40.405371 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:40.405904 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:40.533929 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:40.679235 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:40.902329 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:40.904286 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:41.034961 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:41.179433 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:41.408178 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:41.408597 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:41.534269 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:41.678277 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:41.903763 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:41.904669 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:42.042354 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:42.201951 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:42.409343 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:42.409884 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:42.534314 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:42.679510 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:42.919366 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:42.919523 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:43.034740 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:43.179411 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:43.405365 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:43.405719 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:43.542090 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:43.681614 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:43.903880 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:43.905041 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:44.034701 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:44.179246 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:44.403102 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:44.412720 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:44.541275 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:44.681311 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:44.905185 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:44.905512 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:45.041314 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:45.179499 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:45.408875 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:45.409490 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:45.534537 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:45.679350 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:45.908067 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:45.910049 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:46.038278 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:46.178953 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:46.405496 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:46.406193 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:46.535066 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:46.679906 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:46.904065 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:46.905383 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:47.034539 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:47.178621 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:47.402545 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:47.404404 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:47.536426 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:47.678528 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:47.902666 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:47.904950 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:48.035959 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:48.179784 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:48.407280 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:48.407800 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:48.534117 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:48.678701 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:48.904083 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:48.905026 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:49.034179 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:49.178664 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:49.407266 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:49.407339 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:49.534205 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:49.678488 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:49.904071 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:49.905268 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:50.034552 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:50.179266 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:50.405061 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:50.405441 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:50.534284 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:50.678173 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:50.902954 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:50.904832 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:51.034200 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:51.178418 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:51.410287 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:51.412218 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:51.534534 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:51.679613 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:51.903750 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:51.903889 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:52.034609 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:52.179170 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:52.405961 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:52.406475 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:52.534928 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:52.679151 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:52.904211 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:52.905420 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:53.034302 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:53.178467 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:53.408606 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:53.409013 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:53.534891 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:53.678570 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:53.907147 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:53.907634 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:54.034688 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:54.179694 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:54.407297 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:54.407596 1029992 kapi.go:107] duration metric: took 1m17.006782407s to wait for kubernetes.io/minikube-addons=registry ...
	I1108 09:35:54.535387 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:54.679067 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:54.903755 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:55.035299 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:55.179352 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:55.406874 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:55.533783 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:55.679477 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:55.903113 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:56.034602 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:56.179762 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:56.403249 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:56.534125 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:56.679816 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:56.903442 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:57.034843 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:57.180097 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:57.411339 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:57.533967 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:57.679364 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:57.902735 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:58.033657 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:58.180213 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:58.402492 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:58.539895 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:58.679722 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:58.903794 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:59.034219 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:59.179295 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:59.410054 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:59.534510 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:59.678628 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:59.903413 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:00.097981 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:36:00.185999 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:00.441843 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:00.535879 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:36:00.679266 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:00.902567 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:01.035745 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:36:01.179271 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:01.413809 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:01.536296 1029992 kapi.go:107] duration metric: took 1m21.505595367s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1108 09:36:01.539606 1029992 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-517137 cluster.
	I1108 09:36:01.542547 1029992 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1108 09:36:01.545579 1029992 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
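For reference, a minimal client-go sketch of creating a pod that opts out of the credential mounting described in the message above. It assumes the gcp-auth webhook honors a `gcp-auth-skip-secret: "true"` label (the log only names the key; the value, pod name, image, and namespace below are illustrative, not taken from this run):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the usual kubeconfig (~/.kube/config) and build a clientset.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical name
			Labels: map[string]string{
				// Key taken from the log message above; the "true" value is an assumption.
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "gcr.io/k8s-minikube/busybox", // placeholder image
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	// Create the pod; the gcp-auth mutating webhook should leave it unmodified.
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}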
	I1108 09:36:01.679008 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:01.903555 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:02.180799 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:02.403877 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:02.678882 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:02.903477 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:03.179521 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:03.407939 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:03.679104 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:03.903875 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:04.179581 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:04.412787 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:04.680901 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:04.902656 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:05.179834 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:05.405485 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:05.678769 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:05.903889 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:06.179347 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:06.411007 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:06.678263 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:06.902501 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:07.183197 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:07.403389 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:07.679123 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:07.903127 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:08.178314 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:08.402906 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:08.678567 1029992 kapi.go:107] duration metric: took 1m31.003477389s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1108 09:36:08.903187 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:09.402936 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:09.902909 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:10.403677 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:10.902196 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:11.402984 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:11.902791 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:12.402574 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:12.902360 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:13.407895 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:13.902452 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:14.408910 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:14.902577 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:15.409017 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:15.902593 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:16.412524 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:16.903387 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:17.409750 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:17.902982 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:18.403094 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:18.902399 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:19.408238 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:19.902904 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:20.403625 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:20.902742 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:21.406469 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:21.902255 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:22.408649 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:22.903561 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:23.409799 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:23.902931 1029992 kapi.go:107] duration metric: took 1m46.503885258s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1108 09:36:23.906106 1029992 out.go:179] * Enabled addons: inspektor-gadget, amd-gpu-device-plugin, default-storageclass, cloud-spanner, storage-provisioner-rancher, nvidia-device-plugin, ingress-dns, registry-creds, storage-provisioner, metrics-server, yakd, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1108 09:36:23.908988 1029992 addons.go:515] duration metric: took 1m52.690424936s for enable addons: enabled=[inspektor-gadget amd-gpu-device-plugin default-storageclass cloud-spanner storage-provisioner-rancher nvidia-device-plugin ingress-dns registry-creds storage-provisioner metrics-server yakd volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1108 09:36:23.909044 1029992 start.go:247] waiting for cluster config update ...
	I1108 09:36:23.909072 1029992 start.go:256] writing updated cluster config ...
	I1108 09:36:23.909369 1029992 ssh_runner.go:195] Run: rm -f paused
	I1108 09:36:23.913969 1029992 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:36:23.918488 1029992 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nljjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.924284 1029992 pod_ready.go:94] pod "coredns-66bc5c9577-nljjg" is "Ready"
	I1108 09:36:23.924367 1029992 pod_ready.go:86] duration metric: took 5.838111ms for pod "coredns-66bc5c9577-nljjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.926929 1029992 pod_ready.go:83] waiting for pod "etcd-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.931312 1029992 pod_ready.go:94] pod "etcd-addons-517137" is "Ready"
	I1108 09:36:23.931340 1029992 pod_ready.go:86] duration metric: took 4.382839ms for pod "etcd-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.933676 1029992 pod_ready.go:83] waiting for pod "kube-apiserver-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.938286 1029992 pod_ready.go:94] pod "kube-apiserver-addons-517137" is "Ready"
	I1108 09:36:23.938320 1029992 pod_ready.go:86] duration metric: took 4.616926ms for pod "kube-apiserver-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.941666 1029992 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:24.318988 1029992 pod_ready.go:94] pod "kube-controller-manager-addons-517137" is "Ready"
	I1108 09:36:24.319018 1029992 pod_ready.go:86] duration metric: took 377.326332ms for pod "kube-controller-manager-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:24.519553 1029992 pod_ready.go:83] waiting for pod "kube-proxy-nb7h7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:24.918788 1029992 pod_ready.go:94] pod "kube-proxy-nb7h7" is "Ready"
	I1108 09:36:24.918820 1029992 pod_ready.go:86] duration metric: took 399.237305ms for pod "kube-proxy-nb7h7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:25.119494 1029992 pod_ready.go:83] waiting for pod "kube-scheduler-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:25.518332 1029992 pod_ready.go:94] pod "kube-scheduler-addons-517137" is "Ready"
	I1108 09:36:25.518362 1029992 pod_ready.go:86] duration metric: took 398.840336ms for pod "kube-scheduler-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:25.518374 1029992 pod_ready.go:40] duration metric: took 1.604372108s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:36:25.590934 1029992 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 09:36:25.598992 1029992 out.go:179] * Done! kubectl is now configured to use "addons-517137" cluster and "default" namespace by default
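The repeated kapi.go:96 lines above show minikube polling for pods that match each addon's label selector until they report Ready, logging the last observed phase (here Pending) on every pass, and the pod_ready.go lines apply the same check to the core kube-system components. A minimal sketch of that polling pattern with client-go, assuming a fixed poll interval, timeout, and the default kubeconfig path (none of which are taken from minikube's actual kapi.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForLabel polls pods matching selector in ns until all of them are Ready
// or the timeout expires, printing the current phase on each pass much like
// the kapi.go lines above.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, pods.Items[i].Status.Phase)
					allReady = false
					break
				}
			}
			if allReady {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}

As the interleaved selectors in the log suggest, minikube runs several such waits concurrently; this sketch covers a single selector and only the Ready path.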
	
	
	==> CRI-O <==
	Nov 08 09:39:25 addons-517137 crio[832]: time="2025-11-08T09:39:25.61413816Z" level=info msg="Removed container 430955d9cdd47de679819060680ca38043004c436786688e384f26204783bb8b: kube-system/registry-creds-764b6fb674-d4jk2/registry-creds" id=d598b7f4-bc03-48f3-aa45-e2a48bef6043 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.332916827Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-m58ln/POD" id=72ce35f8-1da1-4758-8476-d96a541bf934 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.332987767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.344605262Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-m58ln Namespace:default ID:6894b6e3826ce8e6d398578788e5751823e9784e4688b12de08ae79f4e318ae6 UID:efc477f2-f493-4f21-b342-3e34df43d403 NetNS:/var/run/netns/a936b1bd-4900-423c-ab6f-2c39dea2a6cc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d868}] Aliases:map[]}"
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.345928977Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-m58ln to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.36651955Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-m58ln Namespace:default ID:6894b6e3826ce8e6d398578788e5751823e9784e4688b12de08ae79f4e318ae6 UID:efc477f2-f493-4f21-b342-3e34df43d403 NetNS:/var/run/netns/a936b1bd-4900-423c-ab6f-2c39dea2a6cc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d868}] Aliases:map[]}"
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.366920778Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-m58ln for CNI network kindnet (type=ptp)"
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.375714685Z" level=info msg="Ran pod sandbox 6894b6e3826ce8e6d398578788e5751823e9784e4688b12de08ae79f4e318ae6 with infra container: default/hello-world-app-5d498dc89-m58ln/POD" id=72ce35f8-1da1-4758-8476-d96a541bf934 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.377214625Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ba9e7cae-3454-4e92-9142-be1c5fe84d9e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.3775314Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=ba9e7cae-3454-4e92-9142-be1c5fe84d9e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.377631992Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=ba9e7cae-3454-4e92-9142-be1c5fe84d9e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.378633508Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=fed0aef0-0a37-4476-a713-0324c9f88e02 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.38108982Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.984681159Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=fed0aef0-0a37-4476-a713-0324c9f88e02 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.985584872Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3a652795-88ce-417f-8cd3-4e4720e32e4c name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.990637357Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c0eab670-005c-4f06-9e29-10bdeea7e28e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.999389033Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-m58ln/hello-world-app" id=33d770b5-faec-44a3-8bff-2e0afe1383f6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:39:26 addons-517137 crio[832]: time="2025-11-08T09:39:26.999509252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:39:27 addons-517137 crio[832]: time="2025-11-08T09:39:27.012791974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:39:27 addons-517137 crio[832]: time="2025-11-08T09:39:27.013067225Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ecb85cf5e1974191925960683c4462da89e5ea0ea0ddea0e10c69b3c53515142/merged/etc/passwd: no such file or directory"
	Nov 08 09:39:27 addons-517137 crio[832]: time="2025-11-08T09:39:27.013092832Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ecb85cf5e1974191925960683c4462da89e5ea0ea0ddea0e10c69b3c53515142/merged/etc/group: no such file or directory"
	Nov 08 09:39:27 addons-517137 crio[832]: time="2025-11-08T09:39:27.013536299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:39:27 addons-517137 crio[832]: time="2025-11-08T09:39:27.035653174Z" level=info msg="Created container 80dbfa1c57aced928318f47f77bf45b1ef4f07c066f30f880a5e687b83b723f0: default/hello-world-app-5d498dc89-m58ln/hello-world-app" id=33d770b5-faec-44a3-8bff-2e0afe1383f6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:39:27 addons-517137 crio[832]: time="2025-11-08T09:39:27.036560826Z" level=info msg="Starting container: 80dbfa1c57aced928318f47f77bf45b1ef4f07c066f30f880a5e687b83b723f0" id=591dbd29-ab85-4101-bd9d-e3a32b6ef70b name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:39:27 addons-517137 crio[832]: time="2025-11-08T09:39:27.042040852Z" level=info msg="Started container" PID=7235 containerID=80dbfa1c57aced928318f47f77bf45b1ef4f07c066f30f880a5e687b83b723f0 description=default/hello-world-app-5d498dc89-m58ln/hello-world-app id=591dbd29-ab85-4101-bd9d-e3a32b6ef70b name=/runtime.v1.RuntimeService/StartContainer sandboxID=6894b6e3826ce8e6d398578788e5751823e9784e4688b12de08ae79f4e318ae6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	80dbfa1c57ace       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   6894b6e3826ce       hello-world-app-5d498dc89-m58ln            default
	d8c5b884c6b24       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             2 seconds ago            Exited              registry-creds                           1                   7cdab6eeb865b       registry-creds-764b6fb674-d4jk2            kube-system
	b423b24cc5cc1       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   957deb787fa1f       nginx                                      default
	68343ec31a80d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   6d8359e0ff240       busybox                                    default
	41b018b3e05c6       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   d30459d61db3b       ingress-nginx-controller-6c8bf45fb-s4bsx   ingress-nginx
	98a7b26a816a4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	0315b8bbbc12a       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	1c9aa88510d22       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	56d6d74a9465d       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	2363b11b1cf45       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	c46777785ca95       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   8aa686a9fe0f0       gcp-auth-78565c9fb4-fzmkf                  gcp-auth
	8169f675b4caa       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   b6560cdaafce3       gadget-gsfbw                               gadget
	0c171eb6d4b83       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago            Exited              patch                                    2                   9dcc77417a820       ingress-nginx-admission-patch-h9qsg        ingress-nginx
	edcad2f498f99       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   4f900f843eea3       registry-proxy-tgh4q                       kube-system
	51409c66bfa0c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	acdbfbb4a8daa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   85b1e8408475f       ingress-nginx-admission-create-5btdn       ingress-nginx
	69eae070b3513       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   563d4dfb0fb7f       yakd-dashboard-5ff678cb9-vqp5m             yakd-dashboard
	b3af115d2fc9a       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   bb34ebfa04aa3       csi-hostpath-resizer-0                     kube-system
	fb847fc15d16f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   1cbfe7df28d37       snapshot-controller-7d9fbc56b8-xvwnx       kube-system
	d8846ff2d41c0       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   324395d3ab9f7       nvidia-device-plugin-daemonset-z6l4p       kube-system
	ef4c40782ee32       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   d3186b78538da       snapshot-controller-7d9fbc56b8-txc5m       kube-system
	0018ff01c56c1       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   9be212f7efd55       cloud-spanner-emulator-6f9fcf858b-l8bpm    default
	84e32df6b9a42       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   9d4f83a5dee0d       csi-hostpath-attacher-0                    kube-system
	081c18a6ec169       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   81cf742c3048d       metrics-server-85b7d694d7-pqhr4            kube-system
	f75f3152c1878       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   84292713e4a7c       registry-6b586f9694-hb7bs                  kube-system
	0ea29a01eb6c6       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   5b0ff61fbd99e       kube-ingress-dns-minikube                  kube-system
	3da17f7633a86       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   4414166947f9a       local-path-provisioner-648f6765c9-rcxpf    local-path-storage
	8e4aed6aef0dd       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   8651534f8bc4d       coredns-66bc5c9577-nljjg                   kube-system
	b3bfe6e8c2cd3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   e9a7987dde477       storage-provisioner                        kube-system
	1922bbd45f9e7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   0d6d223921262       kindnet-c8b5h                              kube-system
	1834bdc4c64d5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   6b97de2f178b9       kube-proxy-nb7h7                           kube-system
	eadcc549cf850       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   e668fbfea36b6       kube-scheduler-addons-517137               kube-system
	d5a319b8c02a6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   29d14f70e17c2       kube-controller-manager-addons-517137      kube-system
	e56a129d33cb1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   910188bfef9e4       etcd-addons-517137                         kube-system
	544f403d8cbc6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   4f167994af5e1       kube-apiserver-addons-517137               kube-system
	
	
	==> coredns [8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9] <==
	[INFO] 10.244.0.10:41177 - 43952 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002132364s
	[INFO] 10.244.0.10:41177 - 39124 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000119111s
	[INFO] 10.244.0.10:41177 - 5893 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000097663s
	[INFO] 10.244.0.10:50210 - 54953 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00017762s
	[INFO] 10.244.0.10:50210 - 54715 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088556s
	[INFO] 10.244.0.10:59189 - 19341 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.003610159s
	[INFO] 10.244.0.10:59189 - 19079 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.003161465s
	[INFO] 10.244.0.10:60021 - 14666 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126159s
	[INFO] 10.244.0.10:60021 - 14494 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068199s
	[INFO] 10.244.0.10:42484 - 53235 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003327499s
	[INFO] 10.244.0.10:42484 - 52774 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003011734s
	[INFO] 10.244.0.10:48937 - 39799 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000174929s
	[INFO] 10.244.0.10:48937 - 39963 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125445s
	[INFO] 10.244.0.20:55880 - 37508 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000390767s
	[INFO] 10.244.0.20:48625 - 42555 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142371s
	[INFO] 10.244.0.20:42827 - 36970 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000276818s
	[INFO] 10.244.0.20:39475 - 31757 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000526796s
	[INFO] 10.244.0.20:43685 - 65433 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015641s
	[INFO] 10.244.0.20:37741 - 3457 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152644s
	[INFO] 10.244.0.20:47036 - 38403 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002407934s
	[INFO] 10.244.0.20:36378 - 32091 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003028579s
	[INFO] 10.244.0.20:50914 - 25361 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001122102s
	[INFO] 10.244.0.20:57285 - 37934 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002396685s
	[INFO] 10.244.0.23:35455 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000183404s
	[INFO] 10.244.0.23:56164 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000159175s
	
	
	==> describe nodes <==
	Name:               addons-517137
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-517137
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=addons-517137
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_34_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-517137
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-517137"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:34:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-517137
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:39:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:39:00 +0000   Sat, 08 Nov 2025 09:34:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:39:00 +0000   Sat, 08 Nov 2025 09:34:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:39:00 +0000   Sat, 08 Nov 2025 09:34:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:39:00 +0000   Sat, 08 Nov 2025 09:35:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-517137
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                1502dec3-de48-4684-9a57-a6d5a07f5971
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     cloud-spanner-emulator-6f9fcf858b-l8bpm     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  default                     hello-world-app-5d498dc89-m58ln             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-gsfbw                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  gcp-auth                    gcp-auth-78565c9fb4-fzmkf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-s4bsx    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m51s
	  kube-system                 coredns-66bc5c9577-nljjg                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m57s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpathplugin-dntzs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 etcd-addons-517137                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m3s
	  kube-system                 kindnet-c8b5h                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m57s
	  kube-system                 kube-apiserver-addons-517137                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-addons-517137       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-nb7h7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-addons-517137                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 metrics-server-85b7d694d7-pqhr4             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m52s
	  kube-system                 nvidia-device-plugin-daemonset-z6l4p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 registry-6b586f9694-hb7bs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-creds-764b6fb674-d4jk2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 registry-proxy-tgh4q                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 snapshot-controller-7d9fbc56b8-txc5m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 snapshot-controller-7d9fbc56b8-xvwnx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  local-path-storage          local-path-provisioner-648f6765c9-rcxpf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vqp5m              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m55s                  kube-proxy       
	  Warning  CgroupV1                 5m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m10s (x9 over 5m10s)  kubelet          Node addons-517137 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m10s (x8 over 5m10s)  kubelet          Node addons-517137 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m10s (x7 over 5m10s)  kubelet          Node addons-517137 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m3s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m3s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m3s                   kubelet          Node addons-517137 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s                   kubelet          Node addons-517137 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m3s                   kubelet          Node addons-517137 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m59s                  node-controller  Node addons-517137 event: Registered Node addons-517137 in Controller
	  Normal   NodeReady                4m16s                  kubelet          Node addons-517137 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 8 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:13] overlayfs: idmapped layers are currently not supported
	[ +27.402772] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:18] overlayfs: idmapped layers are currently not supported
	[  +7.306773] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:20] overlayfs: idmapped layers are currently not supported
	[ +10.554062] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:21] overlayfs: idmapped layers are currently not supported
	[ +13.395960] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:23] overlayfs: idmapped layers are currently not supported
	[ +14.098822] overlayfs: idmapped layers are currently not supported
	[ +16.951080] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:24] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:27] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:28] overlayfs: idmapped layers are currently not supported
	[ +11.539282] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:30] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:32] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 8 09:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e] <==
	{"level":"warn","ts":"2025-11-08T09:34:21.497284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.530554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.567002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.596674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.646080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.667874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.736600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.739722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.768615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.807055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.828191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.885720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.907684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.929754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.968067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.997819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:22.026251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:22.069067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:22.220512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:37.981037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:37.999158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:35:00.002156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:35:00.010006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:35:00.053606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:35:00.083729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51862","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [c46777785ca951ffc280809bd38c53e1ff0698ffbd62470b9fb12cda1e4e30a1] <==
	2025/11/08 09:36:01 GCP Auth Webhook started!
	2025/11/08 09:36:26 Ready to marshal response ...
	2025/11/08 09:36:26 Ready to write response ...
	2025/11/08 09:36:26 Ready to marshal response ...
	2025/11/08 09:36:26 Ready to write response ...
	2025/11/08 09:36:26 Ready to marshal response ...
	2025/11/08 09:36:26 Ready to write response ...
	2025/11/08 09:36:46 Ready to marshal response ...
	2025/11/08 09:36:46 Ready to write response ...
	2025/11/08 09:36:50 Ready to marshal response ...
	2025/11/08 09:36:50 Ready to write response ...
	2025/11/08 09:37:04 Ready to marshal response ...
	2025/11/08 09:37:04 Ready to write response ...
	2025/11/08 09:37:15 Ready to marshal response ...
	2025/11/08 09:37:15 Ready to write response ...
	2025/11/08 09:37:37 Ready to marshal response ...
	2025/11/08 09:37:37 Ready to write response ...
	2025/11/08 09:37:37 Ready to marshal response ...
	2025/11/08 09:37:37 Ready to write response ...
	2025/11/08 09:37:44 Ready to marshal response ...
	2025/11/08 09:37:44 Ready to write response ...
	2025/11/08 09:39:25 Ready to marshal response ...
	2025/11/08 09:39:25 Ready to write response ...
	
	
	==> kernel <==
	 09:39:28 up  8:21,  0 user,  load average: 0.73, 1.82, 2.39
	Linux addons-517137 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3] <==
	I1108 09:37:22.249824       1 main.go:301] handling current node
	I1108 09:37:32.250981       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:37:32.251118       1 main.go:301] handling current node
	I1108 09:37:42.250324       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:37:42.250375       1 main.go:301] handling current node
	I1108 09:37:52.251796       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:37:52.251912       1 main.go:301] handling current node
	I1108 09:38:02.249060       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:38:02.249095       1 main.go:301] handling current node
	I1108 09:38:12.251828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:38:12.251908       1 main.go:301] handling current node
	I1108 09:38:22.251975       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:38:22.252011       1 main.go:301] handling current node
	I1108 09:38:32.256528       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:38:32.256565       1 main.go:301] handling current node
	I1108 09:38:42.251955       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:38:42.251996       1 main.go:301] handling current node
	I1108 09:38:52.249636       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:38:52.249667       1 main.go:301] handling current node
	I1108 09:39:02.250077       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:39:02.250129       1 main.go:301] handling current node
	I1108 09:39:12.249069       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:39:12.249196       1 main.go:301] handling current node
	I1108 09:39:22.249175       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:39:22.249209       1 main.go:301] handling current node
	
	
	==> kube-apiserver [544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13] <==
	E1108 09:35:12.891892       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.96.52:443: connect: connection refused" logger="UnhandledError"
	W1108 09:35:12.970246       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.96.52:443: connect: connection refused
	E1108 09:35:12.971002       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.96.52:443: connect: connection refused" logger="UnhandledError"
	E1108 09:35:30.597242       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.211.77:443: connect: connection refused" logger="UnhandledError"
	W1108 09:35:30.597779       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 09:35:30.597891       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1108 09:35:30.598801       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.211.77:443: connect: connection refused" logger="UnhandledError"
	E1108 09:35:30.646895       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1: bad status from https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1: 403" logger="UnhandledError"
	W1108 09:35:30.646909       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 09:35:30.647406       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1108 09:35:30.688266       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 09:35:30.701207       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1108 09:36:35.553882       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38472: use of closed network connection
	E1108 09:36:35.821666       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38506: use of closed network connection
	E1108 09:36:35.949988       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38532: use of closed network connection
	I1108 09:37:01.947698       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1108 09:37:04.363430       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1108 09:37:04.679602       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.19.142"}
	I1108 09:39:26.180308       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.184.8"}
	
	
	==> kube-controller-manager [d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf] <==
	I1108 09:34:30.029073       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:34:30.029105       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:34:30.029137       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:34:30.024109       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 09:34:30.024129       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:34:30.024140       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:34:30.024169       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:34:30.024186       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:34:30.024313       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:34:30.027799       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:34:30.027832       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:34:30.034729       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:34:30.038142       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:34:30.052068       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-517137" podCIDRs=["10.244.0.0/24"]
	E1108 09:34:36.360070       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1108 09:34:59.983931       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 09:34:59.984085       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1108 09:34:59.984139       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1108 09:35:00.020255       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1108 09:35:00.030149       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1108 09:35:00.088565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:35:00.239979       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:35:15.001758       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1108 09:35:30.096164       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 09:35:30.257510       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5] <==
	I1108 09:34:31.827307       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:34:32.050136       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:34:32.151131       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:34:32.151167       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 09:34:32.151265       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:34:32.314149       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:34:32.314200       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:34:32.321438       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:34:32.321873       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:34:32.321889       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:34:32.323462       1 config.go:200] "Starting service config controller"
	I1108 09:34:32.323472       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:34:32.323487       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:34:32.323492       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:34:32.323508       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:34:32.323512       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:34:32.328282       1 config.go:309] "Starting node config controller"
	I1108 09:34:32.328304       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:34:32.328313       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:34:32.424990       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:34:32.425063       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:34:32.425338       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7] <==
	E1108 09:34:23.145187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:34:23.145248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:34:23.145310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:34:23.145537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1108 09:34:23.148324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:34:23.148459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:34:23.148621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:34:23.148719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:34:23.148798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:34:23.153195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:34:23.153324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:34:23.153367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:34:23.153449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:34:23.153504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:34:23.153543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:34:23.995767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1108 09:34:24.056733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:34:24.085876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:34:24.112220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:34:24.124765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:34:24.203540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:34:24.215769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:34:24.225596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:34:24.294033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1108 09:34:25.923905       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:37:47 addons-517137 kubelet[1286]: I1108 09:37:47.909911    1286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f66e15c846e7e0bcada6da260ac2ff1fb6777aee187cc73339ea82b38c4f84ef"
	Nov 08 09:37:47 addons-517137 kubelet[1286]: E1108 09:37:47.911841    1286 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-975b142a-cf8a-4ec0-aa0b-29691c63b381\" is forbidden: User \"system:node:addons-517137\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-517137' and this object" podUID="cca82872-4da5-4b0f-b95a-9a593322fae5" pod="local-path-storage/helper-pod-delete-pvc-975b142a-cf8a-4ec0-aa0b-29691c63b381"
	Nov 08 09:37:49 addons-517137 kubelet[1286]: I1108 09:37:49.455212    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cca82872-4da5-4b0f-b95a-9a593322fae5" path="/var/lib/kubelet/pods/cca82872-4da5-4b0f-b95a-9a593322fae5/volumes"
	Nov 08 09:38:15 addons-517137 kubelet[1286]: I1108 09:38:15.453458    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-hb7bs" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:38:24 addons-517137 kubelet[1286]: I1108 09:38:24.451767    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-z6l4p" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:38:25 addons-517137 kubelet[1286]: I1108 09:38:25.525734    1286 scope.go:117] "RemoveContainer" containerID="370f8603cb05d614ae41586038c9b5a94564f564b69c251c037876e8423e4bb5"
	Nov 08 09:38:25 addons-517137 kubelet[1286]: I1108 09:38:25.540146    1286 scope.go:117] "RemoveContainer" containerID="2ca5760dbadf1c031378ab65efc8449b6fca7dc62756ab20f30bcf71142a1249"
	Nov 08 09:38:25 addons-517137 kubelet[1286]: I1108 09:38:25.553566    1286 scope.go:117] "RemoveContainer" containerID="e73a6520de2d9ede92b42e285f8a67df6281efc9c33672fd28efaf656eed5ea9"
	Nov 08 09:38:45 addons-517137 kubelet[1286]: I1108 09:38:45.454725    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tgh4q" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:39:23 addons-517137 kubelet[1286]: I1108 09:39:23.153363    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d4jk2" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:39:25 addons-517137 kubelet[1286]: I1108 09:39:25.260677    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d4jk2" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:39:25 addons-517137 kubelet[1286]: I1108 09:39:25.260730    1286 scope.go:117] "RemoveContainer" containerID="430955d9cdd47de679819060680ca38043004c436786688e384f26204783bb8b"
	Nov 08 09:39:25 addons-517137 kubelet[1286]: I1108 09:39:25.585661    1286 scope.go:117] "RemoveContainer" containerID="430955d9cdd47de679819060680ca38043004c436786688e384f26204783bb8b"
	Nov 08 09:39:25 addons-517137 kubelet[1286]: E1108 09:39:25.614971    1286 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a7e733eac471ebf97a962b5603ac4af6bfa275dc12df704fb34bba20ad84f6e8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a7e733eac471ebf97a962b5603ac4af6bfa275dc12df704fb34bba20ad84f6e8/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_registry-creds-764b6fb674-d4jk2_15864f38-1975-41af-a124-d2add8a860bf/registry-creds/0.log" to get inode usage: stat /var/log/pods/kube-system_registry-creds-764b6fb674-d4jk2_15864f38-1975-41af-a124-d2add8a860bf/registry-creds/0.log: no such file or directory
	Nov 08 09:39:25 addons-517137 kubelet[1286]: E1108 09:39:25.617776    1286 manager.go:1116] Failed to create existing container: /crio/crio-430955d9cdd47de679819060680ca38043004c436786688e384f26204783bb8b: Error finding container 430955d9cdd47de679819060680ca38043004c436786688e384f26204783bb8b: Status 404 returned error can't find the container with id 430955d9cdd47de679819060680ca38043004c436786688e384f26204783bb8b
	Nov 08 09:39:26 addons-517137 kubelet[1286]: I1108 09:39:26.040983    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5drkf\" (UniqueName: \"kubernetes.io/projected/efc477f2-f493-4f21-b342-3e34df43d403-kube-api-access-5drkf\") pod \"hello-world-app-5d498dc89-m58ln\" (UID: \"efc477f2-f493-4f21-b342-3e34df43d403\") " pod="default/hello-world-app-5d498dc89-m58ln"
	Nov 08 09:39:26 addons-517137 kubelet[1286]: I1108 09:39:26.041199    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/efc477f2-f493-4f21-b342-3e34df43d403-gcp-creds\") pod \"hello-world-app-5d498dc89-m58ln\" (UID: \"efc477f2-f493-4f21-b342-3e34df43d403\") " pod="default/hello-world-app-5d498dc89-m58ln"
	Nov 08 09:39:26 addons-517137 kubelet[1286]: I1108 09:39:26.266615    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d4jk2" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:39:26 addons-517137 kubelet[1286]: I1108 09:39:26.267267    1286 scope.go:117] "RemoveContainer" containerID="d8c5b884c6b2417ed13af9db742ab8f7016daff8c7ed38043d210302cf4e20f0"
	Nov 08 09:39:26 addons-517137 kubelet[1286]: E1108 09:39:26.268015    1286 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-d4jk2_kube-system(15864f38-1975-41af-a124-d2add8a860bf)\"" pod="kube-system/registry-creds-764b6fb674-d4jk2" podUID="15864f38-1975-41af-a124-d2add8a860bf"
	Nov 08 09:39:26 addons-517137 kubelet[1286]: W1108 09:39:26.375648    1286 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96/crio-6894b6e3826ce8e6d398578788e5751823e9784e4688b12de08ae79f4e318ae6 WatchSource:0}: Error finding container 6894b6e3826ce8e6d398578788e5751823e9784e4688b12de08ae79f4e318ae6: Status 404 returned error can't find the container with id 6894b6e3826ce8e6d398578788e5751823e9784e4688b12de08ae79f4e318ae6
	Nov 08 09:39:26 addons-517137 kubelet[1286]: E1108 09:39:26.722052    1286 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96/crio/crio-430955d9cdd47de679819060680ca38043004c436786688e384f26204783bb8b\": RecentStats: unable to find data in memory cache]"
	Nov 08 09:39:27 addons-517137 kubelet[1286]: I1108 09:39:27.272425    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d4jk2" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:39:27 addons-517137 kubelet[1286]: I1108 09:39:27.272559    1286 scope.go:117] "RemoveContainer" containerID="d8c5b884c6b2417ed13af9db742ab8f7016daff8c7ed38043d210302cf4e20f0"
	Nov 08 09:39:27 addons-517137 kubelet[1286]: E1108 09:39:27.272744    1286 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-d4jk2_kube-system(15864f38-1975-41af-a124-d2add8a860bf)\"" pod="kube-system/registry-creds-764b6fb674-d4jk2" podUID="15864f38-1975-41af-a124-d2add8a860bf"
	
	
	==> storage-provisioner [b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc] <==
	W1108 09:39:03.121993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:05.125545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:05.130816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:07.133527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:07.138255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:09.141851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:09.147466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:11.151108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:11.155781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:13.158426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:13.165114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:15.169481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:15.173940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:17.177294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:17.184131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:19.187305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:19.191663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:21.194722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:21.200702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:23.207027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:23.211839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:25.217468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:25.225873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:27.229072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:39:27.234187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-517137 -n addons-517137
helpers_test.go:269: (dbg) Run:  kubectl --context addons-517137 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-5btdn ingress-nginx-admission-patch-h9qsg
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-517137 describe pod ingress-nginx-admission-create-5btdn ingress-nginx-admission-patch-h9qsg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-517137 describe pod ingress-nginx-admission-create-5btdn ingress-nginx-admission-patch-h9qsg: exit status 1 (79.34497ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5btdn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h9qsg" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-517137 describe pod ingress-nginx-admission-create-5btdn ingress-nginx-admission-patch-h9qsg: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (288.476743ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:39:29.169599 1039580 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:39:29.171231 1039580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:39:29.171252 1039580 out.go:374] Setting ErrFile to fd 2...
	I1108 09:39:29.171259 1039580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:39:29.171603 1039580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:39:29.171972 1039580 mustload.go:66] Loading cluster: addons-517137
	I1108 09:39:29.172466 1039580 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:39:29.172497 1039580 addons.go:607] checking whether the cluster is paused
	I1108 09:39:29.172649 1039580 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:39:29.172668 1039580 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:39:29.173151 1039580 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:39:29.189921 1039580 ssh_runner.go:195] Run: systemctl --version
	I1108 09:39:29.189980 1039580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:39:29.208881 1039580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:39:29.315036 1039580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:39:29.315128 1039580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:39:29.365176 1039580 cri.go:89] found id: "d8c5b884c6b2417ed13af9db742ab8f7016daff8c7ed38043d210302cf4e20f0"
	I1108 09:39:29.365200 1039580 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:39:29.365205 1039580 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:39:29.365214 1039580 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:39:29.365218 1039580 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:39:29.365222 1039580 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:39:29.365225 1039580 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:39:29.365228 1039580 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:39:29.365231 1039580 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:39:29.365237 1039580 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:39:29.365241 1039580 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:39:29.365244 1039580 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:39:29.365248 1039580 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:39:29.365251 1039580 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:39:29.365254 1039580 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:39:29.365259 1039580 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:39:29.365267 1039580 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:39:29.365271 1039580 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:39:29.365274 1039580 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:39:29.365277 1039580 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:39:29.365282 1039580 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:39:29.365289 1039580 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:39:29.365292 1039580 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:39:29.365295 1039580 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:39:29.365299 1039580 cri.go:89] found id: ""
	I1108 09:39:29.365354 1039580 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:39:29.383384 1039580 out.go:203] 
	W1108 09:39:29.386269 1039580 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:39:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:39:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:39:29.386298 1039580 out.go:285] * 
	* 
	W1108 09:39:29.396312 1039580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:39:29.399490 1039580 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable ingress --alsologtostderr -v=1: exit status 11 (291.859225ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:39:29.480736 1039625 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:39:29.481552 1039625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:39:29.481567 1039625 out.go:374] Setting ErrFile to fd 2...
	I1108 09:39:29.481573 1039625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:39:29.481943 1039625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:39:29.482283 1039625 mustload.go:66] Loading cluster: addons-517137
	I1108 09:39:29.482909 1039625 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:39:29.482928 1039625 addons.go:607] checking whether the cluster is paused
	I1108 09:39:29.483054 1039625 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:39:29.483072 1039625 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:39:29.483743 1039625 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:39:29.500957 1039625 ssh_runner.go:195] Run: systemctl --version
	I1108 09:39:29.501017 1039625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:39:29.520684 1039625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:39:29.630906 1039625 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:39:29.631038 1039625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:39:29.661476 1039625 cri.go:89] found id: "d8c5b884c6b2417ed13af9db742ab8f7016daff8c7ed38043d210302cf4e20f0"
	I1108 09:39:29.661510 1039625 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:39:29.661516 1039625 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:39:29.661520 1039625 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:39:29.661524 1039625 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:39:29.661528 1039625 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:39:29.661531 1039625 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:39:29.661535 1039625 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:39:29.661539 1039625 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:39:29.661546 1039625 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:39:29.661549 1039625 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:39:29.661553 1039625 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:39:29.661556 1039625 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:39:29.661560 1039625 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:39:29.661564 1039625 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:39:29.661569 1039625 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:39:29.661572 1039625 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:39:29.661577 1039625 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:39:29.661580 1039625 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:39:29.661583 1039625 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:39:29.661589 1039625 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:39:29.661596 1039625 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:39:29.661599 1039625 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:39:29.661602 1039625 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:39:29.661605 1039625 cri.go:89] found id: ""
	I1108 09:39:29.661660 1039625 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:39:29.676557 1039625 out.go:203] 
	W1108 09:39:29.679366 1039625 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:39:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:39:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:39:29.679396 1039625 out.go:285] * 
	* 
	W1108 09:39:29.687566 1039625 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:39:29.690668 1039625 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.71s)
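Note: every "addons disable" call in this report fails the same way. Before disabling, minikube checks whether the cluster is paused by listing kube-system containers and then running `sudo runc list -f json` on the node; that runc call exits 1 with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal reproduction sketch against this profile follows; the suggestion that CRI-O here does not manage containers through runc's state directory (e.g. because another OCI runtime such as crun is configured) is an assumption, not something the log confirms:

	# Re-run the paused check by hand (commands taken from the stderr above).
	out/minikube-linux-arm64 -p addons-517137 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p addons-517137 ssh -- sudo runc list -f json
	# Hypothetical diagnostic: see which runtime state directories actually exist on the node.
	out/minikube-linux-arm64 -p addons-517137 ssh -- ls /run/runc /run/crun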

TestAddons/parallel/InspektorGadget (6.34s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-gsfbw" [83d670d1-690a-40bb-8db3-431e9d0645d9] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003165276s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (336.649274ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:37:03.704097 1037254 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:37:03.704888 1037254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:03.704931 1037254 out.go:374] Setting ErrFile to fd 2...
	I1108 09:37:03.704953 1037254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:03.705251 1037254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:37:03.705592 1037254 mustload.go:66] Loading cluster: addons-517137
	I1108 09:37:03.706037 1037254 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:03.706083 1037254 addons.go:607] checking whether the cluster is paused
	I1108 09:37:03.706224 1037254 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:03.706261 1037254 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:37:03.706731 1037254 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:37:03.729169 1037254 ssh_runner.go:195] Run: systemctl --version
	I1108 09:37:03.729223 1037254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:37:03.770672 1037254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:37:03.882639 1037254 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:37:03.882729 1037254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:37:03.922619 1037254 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:37:03.922638 1037254 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:37:03.922642 1037254 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:37:03.922646 1037254 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:37:03.922651 1037254 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:37:03.922658 1037254 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:37:03.922662 1037254 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:37:03.922665 1037254 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:37:03.922668 1037254 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:37:03.922674 1037254 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:37:03.922678 1037254 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:37:03.922681 1037254 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:37:03.922684 1037254 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:37:03.922687 1037254 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:37:03.922690 1037254 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:37:03.922695 1037254 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:37:03.922698 1037254 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:37:03.922705 1037254 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:37:03.922708 1037254 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:37:03.922711 1037254 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:37:03.922715 1037254 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:37:03.922718 1037254 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:37:03.922721 1037254 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:37:03.922724 1037254 cri.go:89] found id: ""
	I1108 09:37:03.922772 1037254 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:37:03.957079 1037254 out.go:203] 
	W1108 09:37:03.960316 1037254 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:37:03.960338 1037254 out.go:285] * 
	* 
	W1108 09:37:03.974784 1037254 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:37:03.978180 1037254 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.34s)

TestAddons/parallel/MetricsServer (6.38s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.414948ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003080503s
addons_test.go:463: (dbg) Run:  kubectl --context addons-517137 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (277.038308ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:36:57.431795 1037153 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:36:57.433172 1037153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:36:57.433190 1037153 out.go:374] Setting ErrFile to fd 2...
	I1108 09:36:57.433196 1037153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:36:57.433579 1037153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:36:57.433939 1037153 mustload.go:66] Loading cluster: addons-517137
	I1108 09:36:57.434579 1037153 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:36:57.434600 1037153 addons.go:607] checking whether the cluster is paused
	I1108 09:36:57.434731 1037153 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:36:57.434748 1037153 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:36:57.435532 1037153 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:36:57.456383 1037153 ssh_runner.go:195] Run: systemctl --version
	I1108 09:36:57.456508 1037153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:36:57.474416 1037153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:36:57.579079 1037153 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:36:57.579169 1037153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:36:57.613501 1037153 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:36:57.613525 1037153 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:36:57.613540 1037153 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:36:57.613544 1037153 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:36:57.613572 1037153 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:36:57.613589 1037153 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:36:57.613593 1037153 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:36:57.613596 1037153 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:36:57.613599 1037153 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:36:57.613607 1037153 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:36:57.613613 1037153 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:36:57.613617 1037153 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:36:57.613621 1037153 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:36:57.613631 1037153 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:36:57.613649 1037153 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:36:57.613661 1037153 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:36:57.613665 1037153 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:36:57.613670 1037153 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:36:57.613673 1037153 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:36:57.613676 1037153 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:36:57.613682 1037153 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:36:57.613686 1037153 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:36:57.613689 1037153 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:36:57.613692 1037153 cri.go:89] found id: ""
	I1108 09:36:57.613768 1037153 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:36:57.629745 1037153 out.go:203] 
	W1108 09:36:57.633098 1037153 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:36:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:36:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:36:57.633130 1037153 out.go:285] * 
	* 
	W1108 09:36:57.641458 1037153 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:36:57.644731 1037153 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.38s)

TestAddons/parallel/CSI (44.4s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1108 09:36:39.367147 1029234 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1108 09:36:39.374551 1029234 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1108 09:36:39.374584 1029234 kapi.go:107] duration metric: took 7.449989ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.462124ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-517137 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-517137 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [0f81e110-233e-4012-8a39-a0d72a304b62] Pending
2025/11/08 09:36:50 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:352: "task-pv-pod" [0f81e110-233e-4012-8a39-a0d72a304b62] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [0f81e110-233e-4012-8a39-a0d72a304b62] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.00409151s
addons_test.go:572: (dbg) Run:  kubectl --context addons-517137 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-517137 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-517137 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-517137 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-517137 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-517137 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-517137 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [7e9b439d-59b8-4251-b44f-aeaadd9f6e52] Pending
helpers_test.go:352: "task-pv-pod-restore" [7e9b439d-59b8-4251-b44f-aeaadd9f6e52] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [7e9b439d-59b8-4251-b44f-aeaadd9f6e52] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003707329s
addons_test.go:614: (dbg) Run:  kubectl --context addons-517137 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-517137 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-517137 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (284.535929ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:37:23.259835 1037903 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:37:23.260770 1037903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:23.260786 1037903 out.go:374] Setting ErrFile to fd 2...
	I1108 09:37:23.260792 1037903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:23.261092 1037903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:37:23.261436 1037903 mustload.go:66] Loading cluster: addons-517137
	I1108 09:37:23.261842 1037903 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:23.261862 1037903 addons.go:607] checking whether the cluster is paused
	I1108 09:37:23.261996 1037903 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:23.262014 1037903 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:37:23.262519 1037903 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:37:23.279855 1037903 ssh_runner.go:195] Run: systemctl --version
	I1108 09:37:23.279912 1037903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:37:23.297140 1037903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:37:23.413044 1037903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:37:23.413139 1037903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:37:23.448293 1037903 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:37:23.448316 1037903 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:37:23.448321 1037903 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:37:23.448326 1037903 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:37:23.448329 1037903 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:37:23.448333 1037903 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:37:23.448337 1037903 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:37:23.448340 1037903 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:37:23.448344 1037903 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:37:23.448351 1037903 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:37:23.448354 1037903 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:37:23.448358 1037903 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:37:23.448361 1037903 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:37:23.448365 1037903 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:37:23.448369 1037903 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:37:23.448384 1037903 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:37:23.448392 1037903 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:37:23.448396 1037903 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:37:23.448400 1037903 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:37:23.448403 1037903 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:37:23.448408 1037903 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:37:23.448411 1037903 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:37:23.448414 1037903 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:37:23.448418 1037903 cri.go:89] found id: ""
	I1108 09:37:23.448513 1037903 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:37:23.467009 1037903 out.go:203] 
	W1108 09:37:23.472249 1037903 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:37:23.472279 1037903 out.go:285] * 
	* 
	W1108 09:37:23.480399 1037903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:37:23.484173 1037903 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (269.216252ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:37:23.548690 1037947 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:37:23.550016 1037947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:23.550062 1037947 out.go:374] Setting ErrFile to fd 2...
	I1108 09:37:23.550083 1037947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:23.550440 1037947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:37:23.550770 1037947 mustload.go:66] Loading cluster: addons-517137
	I1108 09:37:23.551231 1037947 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:23.551270 1037947 addons.go:607] checking whether the cluster is paused
	I1108 09:37:23.551426 1037947 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:23.551482 1037947 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:37:23.551984 1037947 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:37:23.568627 1037947 ssh_runner.go:195] Run: systemctl --version
	I1108 09:37:23.568686 1037947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:37:23.590438 1037947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:37:23.696230 1037947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:37:23.696304 1037947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:37:23.723463 1037947 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:37:23.723483 1037947 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:37:23.723487 1037947 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:37:23.723491 1037947 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:37:23.723494 1037947 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:37:23.723497 1037947 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:37:23.723501 1037947 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:37:23.723504 1037947 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:37:23.723507 1037947 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:37:23.723513 1037947 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:37:23.723516 1037947 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:37:23.723520 1037947 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:37:23.723527 1037947 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:37:23.723531 1037947 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:37:23.723534 1037947 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:37:23.723539 1037947 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:37:23.723542 1037947 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:37:23.723548 1037947 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:37:23.723551 1037947 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:37:23.723554 1037947 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:37:23.723559 1037947 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:37:23.723566 1037947 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:37:23.723569 1037947 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:37:23.723572 1037947 cri.go:89] found id: ""
	I1108 09:37:23.723621 1037947 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:37:23.739454 1037947 out.go:203] 
	W1108 09:37:23.742853 1037947 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:37:23.742878 1037947 out.go:285] * 
	* 
	W1108 09:37:23.751066 1037947 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:37:23.754427 1037947 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (44.40s)
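The disable fails in minikube's paused check: before touching the addon it shells into the node and runs "sudo runc list -f json", which exits 1 because /run/runc does not exist on this CRI-O node, even though the same kube-system containers were just enumerated via crictl (the cri.go "found id" lines above). A minimal sketch of reproducing that check by hand against this profile, assuming the cluster from this run is still up:

	# the call minikube makes for the paused check; fails with "open /run/runc: no such file or directory"
	out/minikube-linux-arm64 -p addons-517137 ssh -- sudo runc list -f json
	# the CRI-O view of the same containers succeeds (this is the listing shown in the cri.go lines above)
	out/minikube-linux-arm64 -p addons-517137 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"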

                                                
                                    
TestAddons/parallel/Headlamp (3.15s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-517137 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-517137 --alsologtostderr -v=1: exit status 11 (294.29426ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:36:36.263510 1036207 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:36:36.264347 1036207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:36:36.264391 1036207 out.go:374] Setting ErrFile to fd 2...
	I1108 09:36:36.264412 1036207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:36:36.264900 1036207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:36:36.265402 1036207 mustload.go:66] Loading cluster: addons-517137
	I1108 09:36:36.266110 1036207 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:36:36.266154 1036207 addons.go:607] checking whether the cluster is paused
	I1108 09:36:36.266640 1036207 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:36:36.266697 1036207 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:36:36.267196 1036207 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:36:36.286485 1036207 ssh_runner.go:195] Run: systemctl --version
	I1108 09:36:36.286541 1036207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:36:36.304495 1036207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:36:36.415993 1036207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:36:36.416083 1036207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:36:36.469827 1036207 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:36:36.469854 1036207 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:36:36.469859 1036207 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:36:36.469863 1036207 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:36:36.469867 1036207 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:36:36.469871 1036207 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:36:36.469875 1036207 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:36:36.469878 1036207 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:36:36.469881 1036207 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:36:36.469887 1036207 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:36:36.469891 1036207 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:36:36.469894 1036207 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:36:36.469897 1036207 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:36:36.469900 1036207 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:36:36.469904 1036207 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:36:36.469911 1036207 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:36:36.469919 1036207 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:36:36.469923 1036207 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:36:36.469927 1036207 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:36:36.469930 1036207 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:36:36.469935 1036207 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:36:36.469938 1036207 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:36:36.469941 1036207 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:36:36.469943 1036207 cri.go:89] found id: ""
	I1108 09:36:36.470005 1036207 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:36:36.487452 1036207 out.go:203] 
	W1108 09:36:36.490482 1036207 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:36:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:36:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:36:36.490515 1036207 out.go:285] * 
	* 
	W1108 09:36:36.498718 1036207 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:36:36.501849 1036207 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-517137 --alsologtostderr -v=1": exit status 11
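Same root cause as the CSI failure above: the enable path runs the identical paused check and aborts with MK_ADDON_ENABLE_PAUSED before the headlamp addon is applied. The harness then collects post-mortem state; the equivalent data can be gathered by hand with the same commands it runs below, roughly:

	docker inspect addons-517137
	out/minikube-linux-arm64 status --format={{.Host}} -p addons-517137 -n addons-517137
	out/minikube-linux-arm64 -p addons-517137 logs -n 25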
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-517137
helpers_test.go:243: (dbg) docker inspect addons-517137:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96",
	        "Created": "2025-11-08T09:33:58.811367027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1030391,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:33:58.871150487Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96/hostname",
	        "HostsPath": "/var/lib/docker/containers/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96/hosts",
	        "LogPath": "/var/lib/docker/containers/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96/257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96-json.log",
	        "Name": "/addons-517137",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-517137:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-517137",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "257291073ebc84e5ba03fcc5a3ed6926a44a9fb7692f21cdd70fd2d1dbfb8a96",
	                "LowerDir": "/var/lib/docker/overlay2/db866645afeeb5823a6aa93f3283972ce4e7dead8d77e0804159a3b125b3156f-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db866645afeeb5823a6aa93f3283972ce4e7dead8d77e0804159a3b125b3156f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db866645afeeb5823a6aa93f3283972ce4e7dead8d77e0804159a3b125b3156f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db866645afeeb5823a6aa93f3283972ce4e7dead8d77e0804159a3b125b3156f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-517137",
	                "Source": "/var/lib/docker/volumes/addons-517137/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-517137",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-517137",
	                "name.minikube.sigs.k8s.io": "addons-517137",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afa35d8208a75f9f48ae9c9a21f124fdcd31e0e3fd666d101c56c88535cccfe1",
	            "SandboxKey": "/var/run/docker/netns/afa35d8208a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34225"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34226"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34229"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34228"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-517137": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:2d:20:2c:56:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "39510444f175bd235dac69fe9d69b5513ff5ee07ecfdb89db58c965ceccc7ed9",
	                    "EndpointID": "bb8bbf71b501d5268c7c8296f8abd9ef545bbc3924a8af21a70811ebb3f77da0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-517137",
	                        "257291073ebc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
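The inspect output shows how the test reaches the node: the container's 22/tcp is published on 127.0.0.1:34225, the same endpoint the sshutil client in the stderr above connects to as user docker. As a sketch (port and key path are taken from this run, so they only apply while this container exists), the SSH port can be read straight from the inspect data and used directly:

	# same template minikube uses in the cli_runner line above; prints 34225 for this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-517137
	# manual SSH into the node with the profile's generated key
	ssh -p 34225 -i /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa docker@127.0.0.1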
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-517137 -n addons-517137
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-517137 logs -n 25: (1.452483037s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-554144 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-554144   │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ delete  │ -p download-only-554144                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-554144   │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ start   │ -o=json --download-only -p download-only-504302 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-504302   │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ delete  │ -p download-only-504302                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-504302   │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ delete  │ -p download-only-554144                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-554144   │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ delete  │ -p download-only-504302                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-504302   │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ start   │ --download-only -p download-docker-871809 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-871809 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ delete  │ -p download-docker-871809                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-871809 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ start   │ --download-only -p binary-mirror-870798 --alsologtostderr --binary-mirror http://127.0.0.1:37897 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-870798   │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ delete  │ -p binary-mirror-870798                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-870798   │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ addons  │ enable dashboard -p addons-517137                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ addons  │ disable dashboard -p addons-517137                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ start   │ -p addons-517137 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:36 UTC │
	│ addons  │ addons-517137 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-517137 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:36 UTC │                     │
	│ addons  │ enable headlamp -p addons-517137 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-517137          │ jenkins │ v1.37.0 │ 08 Nov 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:33:32
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:33:32.909072 1029992 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:33:32.909204 1029992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:32.909215 1029992 out.go:374] Setting ErrFile to fd 2...
	I1108 09:33:32.909221 1029992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:32.909469 1029992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:33:32.909902 1029992 out.go:368] Setting JSON to false
	I1108 09:33:32.910691 1029992 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29758,"bootTime":1762564655,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 09:33:32.910756 1029992 start.go:143] virtualization:  
	I1108 09:33:32.918256 1029992 out.go:179] * [addons-517137] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 09:33:32.924121 1029992 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:33:32.924187 1029992 notify.go:221] Checking for updates...
	I1108 09:33:32.934232 1029992 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:33:32.943966 1029992 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 09:33:32.974465 1029992 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 09:33:33.007406 1029992 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 09:33:33.039355 1029992 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:33:33.073168 1029992 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:33:33.095560 1029992 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:33:33.095687 1029992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:33:33.153055 1029992 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-08 09:33:33.141786701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:33:33.153171 1029992 docker.go:319] overlay module found
	I1108 09:33:33.183872 1029992 out.go:179] * Using the docker driver based on user configuration
	I1108 09:33:33.216889 1029992 start.go:309] selected driver: docker
	I1108 09:33:33.216917 1029992 start.go:930] validating driver "docker" against <nil>
	I1108 09:33:33.216932 1029992 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:33:33.217697 1029992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:33:33.272856 1029992 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-08 09:33:33.26409427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:33:33.273017 1029992 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:33:33.273268 1029992 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:33:33.298367 1029992 out.go:179] * Using Docker driver with root privileges
	I1108 09:33:33.343779 1029992 cni.go:84] Creating CNI manager for ""
	I1108 09:33:33.343867 1029992 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:33:33.343883 1029992 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:33:33.343973 1029992 start.go:353] cluster config:
	{Name:addons-517137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-517137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1108 09:33:33.375276 1029992 out.go:179] * Starting "addons-517137" primary control-plane node in "addons-517137" cluster
	I1108 09:33:33.406936 1029992 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:33:33.439837 1029992 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:33:33.470488 1029992 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:33:33.470573 1029992 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 09:33:33.470488 1029992 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:33:33.470585 1029992 cache.go:59] Caching tarball of preloaded images
	I1108 09:33:33.470765 1029992 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 09:33:33.470774 1029992 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:33:33.471119 1029992 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/config.json ...
	I1108 09:33:33.471140 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/config.json: {Name:mk335e1c9e903d2c98e81d98ab41a753d3cbaa26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:33:33.487085 1029992 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:33:33.487235 1029992 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 09:33:33.487260 1029992 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1108 09:33:33.487265 1029992 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1108 09:33:33.487276 1029992 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1108 09:33:33.487287 1029992 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1108 09:33:51.969325 1029992 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1108 09:33:51.969360 1029992 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:33:51.969390 1029992 start.go:360] acquireMachinesLock for addons-517137: {Name:mka52ee401f9ddfa9995f7d13ae17ba555b99bae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:33:51.969499 1029992 start.go:364] duration metric: took 90.295µs to acquireMachinesLock for "addons-517137"
	I1108 09:33:51.969524 1029992 start.go:93] Provisioning new machine with config: &{Name:addons-517137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-517137 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:33:51.969589 1029992 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:33:51.973104 1029992 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1108 09:33:51.973360 1029992 start.go:159] libmachine.API.Create for "addons-517137" (driver="docker")
	I1108 09:33:51.973397 1029992 client.go:173] LocalClient.Create starting
	I1108 09:33:51.973515 1029992 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem
	I1108 09:33:52.095410 1029992 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem
	I1108 09:33:53.163487 1029992 cli_runner.go:164] Run: docker network inspect addons-517137 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:33:53.178819 1029992 cli_runner.go:211] docker network inspect addons-517137 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:33:53.178899 1029992 network_create.go:284] running [docker network inspect addons-517137] to gather additional debugging logs...
	I1108 09:33:53.178922 1029992 cli_runner.go:164] Run: docker network inspect addons-517137
	W1108 09:33:53.195968 1029992 cli_runner.go:211] docker network inspect addons-517137 returned with exit code 1
	I1108 09:33:53.195995 1029992 network_create.go:287] error running [docker network inspect addons-517137]: docker network inspect addons-517137: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-517137 not found
	I1108 09:33:53.196009 1029992 network_create.go:289] output of [docker network inspect addons-517137]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-517137 not found
	
	** /stderr **
	I1108 09:33:53.196112 1029992 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:33:53.212737 1029992 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d38c0}
	I1108 09:33:53.212781 1029992 network_create.go:124] attempt to create docker network addons-517137 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1108 09:33:53.212850 1029992 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-517137 addons-517137
	I1108 09:33:53.273307 1029992 network_create.go:108] docker network addons-517137 192.168.49.0/24 created
	I1108 09:33:53.273340 1029992 kic.go:121] calculated static IP "192.168.49.2" for the "addons-517137" container
	I1108 09:33:53.273436 1029992 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:33:53.289707 1029992 cli_runner.go:164] Run: docker volume create addons-517137 --label name.minikube.sigs.k8s.io=addons-517137 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:33:53.308928 1029992 oci.go:103] Successfully created a docker volume addons-517137
	I1108 09:33:53.309012 1029992 cli_runner.go:164] Run: docker run --rm --name addons-517137-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-517137 --entrypoint /usr/bin/test -v addons-517137:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:33:54.322133 1029992 cli_runner.go:217] Completed: docker run --rm --name addons-517137-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-517137 --entrypoint /usr/bin/test -v addons-517137:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (1.013079687s)
	I1108 09:33:54.322183 1029992 oci.go:107] Successfully prepared a docker volume addons-517137
	I1108 09:33:54.322206 1029992 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:33:54.322223 1029992 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:33:54.322288 1029992 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-517137:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:33:58.744275 1029992 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-517137:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.42194564s)
	I1108 09:33:58.744311 1029992 kic.go:203] duration metric: took 4.422082965s to extract preloaded images to volume ...
	W1108 09:33:58.744472 1029992 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 09:33:58.744587 1029992 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:33:58.797104 1029992 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-517137 --name addons-517137 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-517137 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-517137 --network addons-517137 --ip 192.168.49.2 --volume addons-517137:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:33:59.067321 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Running}}
	I1108 09:33:59.087573 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:33:59.110786 1029992 cli_runner.go:164] Run: docker exec addons-517137 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:33:59.176689 1029992 oci.go:144] the created container "addons-517137" has a running status.
	I1108 09:33:59.176715 1029992 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa...
	I1108 09:33:59.571905 1029992 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:33:59.591320 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:33:59.607237 1029992 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:33:59.607254 1029992 kic_runner.go:114] Args: [docker exec --privileged addons-517137 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:33:59.646010 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:33:59.662296 1029992 machine.go:94] provisionDockerMachine start ...
	I1108 09:33:59.662392 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:33:59.679640 1029992 main.go:143] libmachine: Using SSH client type: native
	I1108 09:33:59.679977 1029992 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1108 09:33:59.679996 1029992 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:33:59.680598 1029992 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 09:34:02.832656 1029992 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-517137
	
	I1108 09:34:02.832684 1029992 ubuntu.go:182] provisioning hostname "addons-517137"
	I1108 09:34:02.832747 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:02.852193 1029992 main.go:143] libmachine: Using SSH client type: native
	I1108 09:34:02.852641 1029992 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1108 09:34:02.852659 1029992 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-517137 && echo "addons-517137" | sudo tee /etc/hostname
	I1108 09:34:03.015851 1029992 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-517137
	
	I1108 09:34:03.015937 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:03.034720 1029992 main.go:143] libmachine: Using SSH client type: native
	I1108 09:34:03.035040 1029992 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1108 09:34:03.035063 1029992 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-517137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-517137/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-517137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:34:03.184976 1029992 main.go:143] libmachine: SSH cmd err, output: <nil>: 
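The SSH snippet above sets the node's hostname and rewrites the 127.0.1.1 entry in /etc/hosts to match; a quick way to confirm the result from the host (standard docker/grep commands, profile name taken from this log):

    # verify hostname and /etc/hosts inside the node container
    docker exec addons-517137 hostname
    docker exec addons-517137 grep 127.0.1.1 /etc/hosts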
	I1108 09:34:03.185049 1029992 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 09:34:03.185074 1029992 ubuntu.go:190] setting up certificates
	I1108 09:34:03.185084 1029992 provision.go:84] configureAuth start
	I1108 09:34:03.185146 1029992 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-517137
	I1108 09:34:03.202935 1029992 provision.go:143] copyHostCerts
	I1108 09:34:03.203035 1029992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 09:34:03.203173 1029992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 09:34:03.203264 1029992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 09:34:03.203343 1029992 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.addons-517137 san=[127.0.0.1 192.168.49.2 addons-517137 localhost minikube]
	I1108 09:34:03.750105 1029992 provision.go:177] copyRemoteCerts
	I1108 09:34:03.750181 1029992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:34:03.750223 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:03.768815 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:03.876611 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:34:03.894410 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 09:34:03.911483 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 09:34:03.929442 1029992 provision.go:87] duration metric: took 744.34399ms to configureAuth
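configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.49.2, addons-517137, localhost and minikube; if needed, the SANs can be read back with openssl (path taken from this log):

    # list the SANs embedded in the generated machine server certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'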
	I1108 09:34:03.929519 1029992 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:34:03.929736 1029992 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:34:03.929846 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:03.946608 1029992 main.go:143] libmachine: Using SSH client type: native
	I1108 09:34:03.946917 1029992 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1108 09:34:03.946937 1029992 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:34:04.204712 1029992 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:34:04.204737 1029992 machine.go:97] duration metric: took 4.542415609s to provisionDockerMachine
	I1108 09:34:04.204749 1029992 client.go:176] duration metric: took 12.231341523s to LocalClient.Create
	I1108 09:34:04.204762 1029992 start.go:167] duration metric: took 12.231407121s to libmachine.API.Create "addons-517137"
	I1108 09:34:04.204769 1029992 start.go:293] postStartSetup for "addons-517137" (driver="docker")
	I1108 09:34:04.204779 1029992 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:34:04.204847 1029992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:34:04.204891 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:04.223539 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:04.332983 1029992 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:34:04.336380 1029992 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:34:04.336406 1029992 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:34:04.336418 1029992 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 09:34:04.336511 1029992 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 09:34:04.336540 1029992 start.go:296] duration metric: took 131.765751ms for postStartSetup
	I1108 09:34:04.336858 1029992 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-517137
	I1108 09:34:04.353551 1029992 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/config.json ...
	I1108 09:34:04.353848 1029992 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:34:04.353898 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:04.370483 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:04.473361 1029992 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:34:04.478008 1029992 start.go:128] duration metric: took 12.508404461s to createHost
	I1108 09:34:04.478031 1029992 start.go:83] releasing machines lock for "addons-517137", held for 12.508523826s
	I1108 09:34:04.478100 1029992 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-517137
	I1108 09:34:04.494705 1029992 ssh_runner.go:195] Run: cat /version.json
	I1108 09:34:04.494731 1029992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:34:04.494760 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:04.494789 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:04.512700 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:04.514365 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:04.703191 1029992 ssh_runner.go:195] Run: systemctl --version
	I1108 09:34:04.709235 1029992 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:34:04.746835 1029992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:34:04.751093 1029992 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:34:04.751163 1029992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:34:04.780742 1029992 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 09:34:04.780768 1029992 start.go:496] detecting cgroup driver to use...
	I1108 09:34:04.780801 1029992 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 09:34:04.780853 1029992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:34:04.797719 1029992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:34:04.810268 1029992 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:34:04.810329 1029992 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:34:04.828064 1029992 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:34:04.845877 1029992 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:34:04.964854 1029992 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:34:05.106254 1029992 docker.go:234] disabling docker service ...
	I1108 09:34:05.106323 1029992 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:34:05.129065 1029992 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:34:05.143126 1029992 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:34:05.265935 1029992 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:34:05.384903 1029992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:34:05.398509 1029992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:34:05.413579 1029992 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:34:05.413657 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.422733 1029992 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 09:34:05.422812 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.431789 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.440353 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.449378 1029992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:34:05.457657 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.466504 1029992 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.479757 1029992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:34:05.488344 1029992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:34:05.495870 1029992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:34:05.503208 1029992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:34:05.619609 1029992 ssh_runner.go:195] Run: sudo systemctl restart crio
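The sed edits above set the pause image, the cgroup manager and the unprivileged-port sysctl before CRI-O is restarted; after the restart the effective values can be read back from CRI-O's own config dump (the same `crio config` call this log runs later):

    # read back the effective CRI-O settings (ip_unprivileged_port_start appears under default_sysctls)
    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|ip_unprivileged_port_start'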
	I1108 09:34:05.742575 1029992 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:34:05.742719 1029992 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:34:05.746540 1029992 start.go:564] Will wait 60s for crictl version
	I1108 09:34:05.746654 1029992 ssh_runner.go:195] Run: which crictl
	I1108 09:34:05.750356 1029992 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:34:05.783474 1029992 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:34:05.783610 1029992 ssh_runner.go:195] Run: crio --version
	I1108 09:34:05.814443 1029992 ssh_runner.go:195] Run: crio --version
	I1108 09:34:05.846790 1029992 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:34:05.849832 1029992 cli_runner.go:164] Run: docker network inspect addons-517137 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:34:05.866436 1029992 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1108 09:34:05.870250 1029992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:34:05.880210 1029992 kubeadm.go:884] updating cluster {Name:addons-517137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-517137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:34:05.880324 1029992 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:34:05.880390 1029992 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:34:05.919054 1029992 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:34:05.919084 1029992 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:34:05.919155 1029992 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:34:05.943619 1029992 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:34:05.943646 1029992 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:34:05.943655 1029992 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1108 09:34:05.943756 1029992 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-517137 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-517137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:34:05.943846 1029992 ssh_runner.go:195] Run: crio config
	I1108 09:34:06.008489 1029992 cni.go:84] Creating CNI manager for ""
	I1108 09:34:06.008518 1029992 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:34:06.008551 1029992 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:34:06.008582 1029992 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-517137 NodeName:addons-517137 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:34:06.008737 1029992 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-517137"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:34:06.008823 1029992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:34:06.018358 1029992 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:34:06.018490 1029992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:34:06.027005 1029992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1108 09:34:06.041206 1029992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:34:06.056264 1029992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
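The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied to kubeadm.yaml before init; assuming the `kubeadm config validate` subcommand present in recent kubeadm releases, it could be sanity-checked inside the node with:

    # sanity-check the staged kubeadm config (subcommand availability is an assumption)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new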
	I1108 09:34:06.069881 1029992 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:34:06.073623 1029992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:34:06.083822 1029992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:34:06.200023 1029992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:34:06.223537 1029992 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137 for IP: 192.168.49.2
	I1108 09:34:06.223556 1029992 certs.go:195] generating shared ca certs ...
	I1108 09:34:06.223571 1029992 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:06.223775 1029992 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 09:34:07.150941 1029992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt ...
	I1108 09:34:07.150971 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt: {Name:mkea0a47b63d07c9c4a4b5d0cf2668280a966698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:07.151174 1029992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key ...
	I1108 09:34:07.151194 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key: {Name:mk0561239475f2ae8f7a9724b7319a0d1d2c4d72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:07.151288 1029992 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 09:34:08.495820 1029992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt ...
	I1108 09:34:08.495851 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt: {Name:mk27f92ce91dda6a8215eb48ff9f10d8956c1f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:08.496043 1029992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key ...
	I1108 09:34:08.496060 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key: {Name:mk4e740b08be2b3d57948460919940456d8b5a0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:08.496155 1029992 certs.go:257] generating profile certs ...
	I1108 09:34:08.496223 1029992 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.key
	I1108 09:34:08.496241 1029992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt with IP's: []
	I1108 09:34:08.577578 1029992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt ...
	I1108 09:34:08.577605 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: {Name:mkc6cd9af3a8375ea817435a28926e86a1c5755f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:08.577773 1029992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.key ...
	I1108 09:34:08.577785 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.key: {Name:mka3f298034f0eb8f75532892e7a985f90e8783c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:08.577872 1029992 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key.f79cf1b5
	I1108 09:34:08.577891 1029992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt.f79cf1b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1108 09:34:09.096832 1029992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt.f79cf1b5 ...
	I1108 09:34:09.096867 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt.f79cf1b5: {Name:mk582960f4b151b73b42101173cb1c0c6f453aef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:09.097069 1029992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key.f79cf1b5 ...
	I1108 09:34:09.097083 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key.f79cf1b5: {Name:mka40fcb07ec47c39853afbe93849b08252ab5b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:09.097167 1029992 certs.go:382] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt.f79cf1b5 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt
	I1108 09:34:09.097254 1029992 certs.go:386] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key.f79cf1b5 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key
	I1108 09:34:09.097313 1029992 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.key
	I1108 09:34:09.097336 1029992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.crt with IP's: []
	I1108 09:34:09.824141 1029992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.crt ...
	I1108 09:34:09.824173 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.crt: {Name:mk77d47d458cc2c30cb8ef24936b30c34e8d441e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:09.824356 1029992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.key ...
	I1108 09:34:09.824370 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.key: {Name:mk181d8aa5d1570bf50dcdb8669d2a966f8263db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:09.824582 1029992 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:34:09.824626 1029992 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 09:34:09.824651 1029992 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:34:09.824681 1029992 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 09:34:09.825370 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:34:09.842500 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:34:09.860460 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:34:09.877073 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 09:34:09.893039 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:34:09.910084 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 09:34:09.927099 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:34:09.943691 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:34:09.961049 1029992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:34:09.977966 1029992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:34:09.990899 1029992 ssh_runner.go:195] Run: openssl version
	I1108 09:34:09.997012 1029992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:34:10.005264 1029992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:34:10.010645 1029992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:34:10.010827 1029992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:34:10.055628 1029992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
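The openssl call above computes the CA subject hash (b5213941) that names the /etc/ssl/certs symlink created next; the check can be reproduced by hand inside the node:

    # recompute the subject hash and confirm the trust-store symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0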
	I1108 09:34:10.064094 1029992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:34:10.067617 1029992 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:34:10.067675 1029992 kubeadm.go:401] StartCluster: {Name:addons-517137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-517137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:34:10.067761 1029992 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:34:10.067824 1029992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:34:10.098971 1029992 cri.go:89] found id: ""
	I1108 09:34:10.099064 1029992 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:34:10.110532 1029992 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:34:10.119540 1029992 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:34:10.119645 1029992 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:34:10.129253 1029992 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:34:10.129315 1029992 kubeadm.go:158] found existing configuration files:
	
	I1108 09:34:10.129401 1029992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:34:10.138188 1029992 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:34:10.138309 1029992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:34:10.146525 1029992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:34:10.155493 1029992 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:34:10.155686 1029992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:34:10.163208 1029992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:34:10.171289 1029992 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:34:10.171384 1029992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:34:10.178683 1029992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:34:10.186297 1029992 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:34:10.186397 1029992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:34:10.193865 1029992 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:34:10.236336 1029992 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:34:10.236852 1029992 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:34:10.258529 1029992 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:34:10.258656 1029992 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 09:34:10.258760 1029992 kubeadm.go:319] OS: Linux
	I1108 09:34:10.258872 1029992 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:34:10.258959 1029992 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 09:34:10.259046 1029992 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:34:10.259140 1029992 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:34:10.259284 1029992 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:34:10.259370 1029992 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:34:10.259452 1029992 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:34:10.259539 1029992 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:34:10.259620 1029992 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 09:34:10.321228 1029992 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:34:10.321420 1029992 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:34:10.321563 1029992 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:34:10.333485 1029992 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:34:10.339867 1029992 out.go:252]   - Generating certificates and keys ...
	I1108 09:34:10.339992 1029992 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:34:10.340074 1029992 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:34:10.989614 1029992 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:34:11.581625 1029992 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:34:11.674110 1029992 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:34:12.677368 1029992 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:34:13.348916 1029992 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:34:13.349261 1029992 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-517137 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:34:13.494759 1029992 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:34:13.495123 1029992 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-517137 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:34:13.601783 1029992 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:34:13.960205 1029992 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:34:14.596820 1029992 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:34:14.596912 1029992 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:34:15.572739 1029992 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:34:15.979389 1029992 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:34:16.257640 1029992 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:34:16.885254 1029992 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:34:17.018192 1029992 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:34:17.018935 1029992 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:34:17.021832 1029992 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:34:17.025443 1029992 out.go:252]   - Booting up control plane ...
	I1108 09:34:17.025555 1029992 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:34:17.025645 1029992 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:34:17.026433 1029992 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:34:17.041466 1029992 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:34:17.041619 1029992 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:34:17.051630 1029992 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:34:17.051755 1029992 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:34:17.051815 1029992 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:34:17.180955 1029992 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:34:17.181098 1029992 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:34:18.681509 1029992 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500839051s
	I1108 09:34:18.685085 1029992 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:34:18.685188 1029992 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1108 09:34:18.685307 1029992 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:34:18.685398 1029992 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:34:22.202819 1029992 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.516920789s
	I1108 09:34:23.135802 1029992 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.450681413s
	I1108 09:34:24.686653 1029992 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001473454s
	I1108 09:34:24.709533 1029992 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:34:24.721577 1029992 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:34:24.735763 1029992 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:34:24.736019 1029992 kubeadm.go:319] [mark-control-plane] Marking the node addons-517137 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:34:24.747586 1029992 kubeadm.go:319] [bootstrap-token] Using token: ahprr5.dno4v0t3rz7ucop8
	I1108 09:34:24.752674 1029992 out.go:252]   - Configuring RBAC rules ...
	I1108 09:34:24.752831 1029992 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:34:24.757999 1029992 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:34:24.767046 1029992 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:34:24.772158 1029992 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:34:24.780208 1029992 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:34:24.784639 1029992 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:34:25.094305 1029992 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:34:25.548524 1029992 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:34:26.093606 1029992 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:34:26.094884 1029992 kubeadm.go:319] 
	I1108 09:34:26.094961 1029992 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:34:26.094985 1029992 kubeadm.go:319] 
	I1108 09:34:26.095066 1029992 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:34:26.095071 1029992 kubeadm.go:319] 
	I1108 09:34:26.095097 1029992 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:34:26.095159 1029992 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:34:26.095223 1029992 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:34:26.095230 1029992 kubeadm.go:319] 
	I1108 09:34:26.095286 1029992 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:34:26.095291 1029992 kubeadm.go:319] 
	I1108 09:34:26.095340 1029992 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:34:26.095344 1029992 kubeadm.go:319] 
	I1108 09:34:26.095398 1029992 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:34:26.095477 1029992 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:34:26.095549 1029992 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:34:26.095553 1029992 kubeadm.go:319] 
	I1108 09:34:26.095642 1029992 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:34:26.095722 1029992 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:34:26.095726 1029992 kubeadm.go:319] 
	I1108 09:34:26.095813 1029992 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ahprr5.dno4v0t3rz7ucop8 \
	I1108 09:34:26.095921 1029992 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 \
	I1108 09:34:26.095943 1029992 kubeadm.go:319] 	--control-plane 
	I1108 09:34:26.095947 1029992 kubeadm.go:319] 
	I1108 09:34:26.096036 1029992 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:34:26.096040 1029992 kubeadm.go:319] 
	I1108 09:34:26.096126 1029992 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ahprr5.dno4v0t3rz7ucop8 \
	I1108 09:34:26.096232 1029992 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 
	I1108 09:34:26.100146 1029992 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 09:34:26.100424 1029992 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 09:34:26.100602 1029992 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
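The join commands printed above embed a discovery-token-ca-cert-hash; assuming the RSA CA key minikube generates by default, the same digest can be recomputed inside the node from the cluster CA with the standard openssl recipe:

    # recompute the discovery-token-ca-cert-hash from the cluster CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'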
	I1108 09:34:26.100650 1029992 cni.go:84] Creating CNI manager for ""
	I1108 09:34:26.100666 1029992 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:34:26.103865 1029992 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:34:26.106792 1029992 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:34:26.110872 1029992 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:34:26.110903 1029992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:34:26.126567 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
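With the kindnet manifest applied above, the CNI pods should appear in kube-system; a quick check mirroring how this log invokes kubectl (the `app=kindnet` label selector is an assumption about the bundled manifest):

    # confirm the kindnet daemonset pods are scheduled
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet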
	I1108 09:34:26.441973 1029992 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:34:26.442067 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:26.442133 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-517137 minikube.k8s.io/updated_at=2025_11_08T09_34_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=addons-517137 minikube.k8s.io/primary=true
	I1108 09:34:26.458809 1029992 ops.go:34] apiserver oom_adj: -16
	I1108 09:34:26.576637 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:27.076753 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:27.577063 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:28.077063 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:28.576712 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:29.077619 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:29.576694 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:30.077733 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:30.577657 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:31.076742 1029992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:34:31.217295 1029992 kubeadm.go:1114] duration metric: took 4.775289541s to wait for elevateKubeSystemPrivileges
	I1108 09:34:31.217323 1029992 kubeadm.go:403] duration metric: took 21.149651168s to StartCluster
	I1108 09:34:31.217340 1029992 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:31.217455 1029992 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 09:34:31.217845 1029992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:34:31.218035 1029992 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:34:31.218255 1029992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:34:31.218553 1029992 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1108 09:34:31.218658 1029992 addons.go:70] Setting yakd=true in profile "addons-517137"
	I1108 09:34:31.218672 1029992 addons.go:239] Setting addon yakd=true in "addons-517137"
	I1108 09:34:31.218694 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.219195 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.219719 1029992 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:34:31.219873 1029992 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-517137"
	I1108 09:34:31.219901 1029992 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-517137"
	I1108 09:34:31.219929 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.220076 1029992 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-517137"
	I1108 09:34:31.220134 1029992 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-517137"
	I1108 09:34:31.220171 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.220362 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.220707 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.225134 1029992 addons.go:70] Setting registry=true in profile "addons-517137"
	I1108 09:34:31.225167 1029992 addons.go:239] Setting addon registry=true in "addons-517137"
	I1108 09:34:31.225202 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.225328 1029992 addons.go:70] Setting cloud-spanner=true in profile "addons-517137"
	I1108 09:34:31.225348 1029992 addons.go:239] Setting addon cloud-spanner=true in "addons-517137"
	I1108 09:34:31.225367 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.225795 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.226238 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.227429 1029992 out.go:179] * Verifying Kubernetes components...
	I1108 09:34:31.227711 1029992 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-517137"
	I1108 09:34:31.227780 1029992 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-517137"
	I1108 09:34:31.227816 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.228192 1029992 addons.go:70] Setting registry-creds=true in profile "addons-517137"
	I1108 09:34:31.228210 1029992 addons.go:239] Setting addon registry-creds=true in "addons-517137"
	I1108 09:34:31.228234 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.228247 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.228800 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.243915 1029992 addons.go:70] Setting storage-provisioner=true in profile "addons-517137"
	I1108 09:34:31.243973 1029992 addons.go:239] Setting addon storage-provisioner=true in "addons-517137"
	I1108 09:34:31.244075 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.244681 1029992 addons.go:70] Setting default-storageclass=true in profile "addons-517137"
	I1108 09:34:31.244705 1029992 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-517137"
	I1108 09:34:31.244881 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.244975 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.258696 1029992 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-517137"
	I1108 09:34:31.258726 1029992 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-517137"
	I1108 09:34:31.259063 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.276809 1029992 addons.go:70] Setting volcano=true in profile "addons-517137"
	I1108 09:34:31.276843 1029992 addons.go:239] Setting addon volcano=true in "addons-517137"
	I1108 09:34:31.276898 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.277380 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.280128 1029992 addons.go:70] Setting gcp-auth=true in profile "addons-517137"
	I1108 09:34:31.280218 1029992 mustload.go:66] Loading cluster: addons-517137
	I1108 09:34:31.287760 1029992 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:34:31.288207 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.298686 1029992 addons.go:70] Setting volumesnapshots=true in profile "addons-517137"
	I1108 09:34:31.298716 1029992 addons.go:239] Setting addon volumesnapshots=true in "addons-517137"
	I1108 09:34:31.298753 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.299253 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.308644 1029992 addons.go:70] Setting ingress=true in profile "addons-517137"
	I1108 09:34:31.308720 1029992 addons.go:239] Setting addon ingress=true in "addons-517137"
	I1108 09:34:31.308796 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.309336 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.356705 1029992 addons.go:70] Setting ingress-dns=true in profile "addons-517137"
	I1108 09:34:31.356755 1029992 addons.go:239] Setting addon ingress-dns=true in "addons-517137"
	I1108 09:34:31.356813 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.357393 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.358200 1029992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:34:31.388269 1029992 addons.go:70] Setting inspektor-gadget=true in profile "addons-517137"
	I1108 09:34:31.388300 1029992 addons.go:239] Setting addon inspektor-gadget=true in "addons-517137"
	I1108 09:34:31.388346 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.388996 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.423940 1029992 addons.go:70] Setting metrics-server=true in profile "addons-517137"
	I1108 09:34:31.423969 1029992 addons.go:239] Setting addon metrics-server=true in "addons-517137"
	I1108 09:34:31.424012 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.424494 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.490598 1029992 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1108 09:34:31.500888 1029992 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1108 09:34:31.501174 1029992 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1108 09:34:31.501997 1029992 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1108 09:34:31.522708 1029992 addons.go:239] Setting addon default-storageclass=true in "addons-517137"
	I1108 09:34:31.525827 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.526547 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.530416 1029992 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:34:31.530491 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1108 09:34:31.530592 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.523689 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.520641 1029992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:34:31.521525 1029992 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1108 09:34:31.553895 1029992 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1108 09:34:31.554046 1029992 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:34:31.554064 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1108 09:34:31.554163 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.554798 1029992 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1108 09:34:31.554817 1029992 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1108 09:34:31.554872 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.521518 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1108 09:34:31.560755 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1108 09:34:31.566828 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	W1108 09:34:31.523779 1029992 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1108 09:34:31.572654 1029992 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:34:31.573008 1029992 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1108 09:34:31.573024 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1108 09:34:31.573091 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.577782 1029992 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1108 09:34:31.578201 1029992 out.go:179]   - Using image docker.io/registry:3.0.0
	I1108 09:34:31.578541 1029992 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:34:31.578561 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:34:31.578630 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.586731 1029992 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:34:31.586753 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1108 09:34:31.586816 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.600171 1029992 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:34:31.600201 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1108 09:34:31.600259 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.604588 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1108 09:34:31.607925 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1108 09:34:31.607950 1029992 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1108 09:34:31.608020 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.618020 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1108 09:34:31.618325 1029992 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1108 09:34:31.618347 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1108 09:34:31.618410 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.634777 1029992 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:34:31.644165 1029992 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1108 09:34:31.645535 1029992 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-517137"
	I1108 09:34:31.645573 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:31.645992 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:31.694218 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.696073 1029992 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:34:31.696091 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1108 09:34:31.696155 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.700602 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.701148 1029992 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:34:31.701370 1029992 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1108 09:34:31.703270 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1108 09:34:31.703588 1029992 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:34:31.704374 1029992 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:34:31.704460 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.711729 1029992 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 09:34:31.711753 1029992 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 09:34:31.711822 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.724256 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1108 09:34:31.726054 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.727126 1029992 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1108 09:34:31.728322 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.731322 1029992 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:34:31.731347 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1108 09:34:31.731411 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.732282 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1108 09:34:31.740146 1029992 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1108 09:34:31.743610 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1108 09:34:31.743647 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1108 09:34:31.743717 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.756384 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.830517 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.832553 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.845367 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.853297 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.870519 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.899821 1029992 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1108 09:34:31.905270 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.910158 1029992 out.go:179]   - Using image docker.io/busybox:stable
	I1108 09:34:31.910478 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.913230 1029992 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:34:31.913253 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1108 09:34:31.913323 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:31.918235 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.921426 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:31.948678 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:32.016428 1029992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:34:32.399943 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:34:32.523820 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:34:32.532211 1029992 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1108 09:34:32.532237 1029992 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1108 09:34:32.610140 1029992 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 09:34:32.610163 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1108 09:34:32.628101 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:34:32.640105 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:34:32.715121 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1108 09:34:32.719684 1029992 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1108 09:34:32.719769 1029992 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1108 09:34:32.721992 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:34:32.750533 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:34:32.759624 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:34:32.767117 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:34:32.769450 1029992 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 09:34:32.769517 1029992 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 09:34:32.772859 1029992 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1108 09:34:32.772923 1029992 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1108 09:34:32.774875 1029992 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1108 09:34:32.774935 1029992 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1108 09:34:32.777350 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:34:32.789639 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1108 09:34:32.789711 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1108 09:34:32.896848 1029992 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:34:32.896872 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1108 09:34:32.898578 1029992 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1108 09:34:32.898599 1029992 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1108 09:34:32.932923 1029992 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:34:32.932946 1029992 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 09:34:32.959325 1029992 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1108 09:34:32.959403 1029992 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1108 09:34:32.963426 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1108 09:34:32.963506 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1108 09:34:33.040408 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:34:33.064014 1029992 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1108 09:34:33.064091 1029992 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1108 09:34:33.104511 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1108 09:34:33.104589 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1108 09:34:33.169107 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:34:33.171349 1029992 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:34:33.171415 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1108 09:34:33.177127 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1108 09:34:33.177199 1029992 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1108 09:34:33.244375 1029992 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.693705969s)
	I1108 09:34:33.244463 1029992 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1108 09:34:33.244515 1029992 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.228054601s)
	I1108 09:34:33.246033 1029992 node_ready.go:35] waiting up to 6m0s for node "addons-517137" to be "Ready" ...
	I1108 09:34:33.255162 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1108 09:34:33.255255 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1108 09:34:33.350098 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:34:33.400259 1029992 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:34:33.400330 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1108 09:34:33.455033 1029992 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1108 09:34:33.455107 1029992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1108 09:34:33.629042 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1108 09:34:33.629114 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1108 09:34:33.675171 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:34:33.750013 1029992 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-517137" context rescaled to 1 replicas
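Editor's note: the "rescaled to 1 replicas" line above is minikube trimming CoreDNS to a single replica on this one-node cluster. An illustrative kubectl equivalent (the tooling does this through the API in kapi.go, not by running this command):

	kubectl -n kube-system scale deployment coredns --replicas=1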
	I1108 09:34:33.881988 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1108 09:34:33.882062 1029992 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1108 09:34:34.104769 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1108 09:34:34.104839 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1108 09:34:34.339333 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1108 09:34:34.339412 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1108 09:34:34.510508 1029992 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 09:34:34.510533 1029992 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1108 09:34:34.741594 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1108 09:34:35.270467 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:35.961846 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.561827869s)
	I1108 09:34:35.961925 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.333751137s)
	I1108 09:34:35.961890 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.438045064s)
	I1108 09:34:36.622217 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.982039632s)
	I1108 09:34:36.622411 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.907216937s)
	I1108 09:34:37.390769 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.668702732s)
	I1108 09:34:37.390805 1029992 addons.go:480] Verifying addon ingress=true in "addons-517137"
	I1108 09:34:37.390983 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.640381507s)
	I1108 09:34:37.391012 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.631329168s)
	I1108 09:34:37.391026 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.623851507s)
	I1108 09:34:37.391117 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.613711871s)
	I1108 09:34:37.391145 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.350660533s)
	I1108 09:34:37.391156 1029992 addons.go:480] Verifying addon registry=true in "addons-517137"
	I1108 09:34:37.391232 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.041058857s)
	I1108 09:34:37.391443 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.71617951s)
	W1108 09:34:37.392089 1029992 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1108 09:34:37.392122 1029992 retry.go:31] will retry after 195.795653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
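Editor's note: the failure above is an ordering race rather than a bad manifest — the VolumeSnapshotClass object is applied in the same batch as the CRDs that define its kind, so the API server has no resource mapping for it yet; minikube handles this by retrying the apply with --force at 09:34:37.588 below. A hedged sketch of avoiding the race by letting the CRDs become Established before applying resources that use them (illustrative commands, not taken from this run):

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml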
	I1108 09:34:37.391459 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.222283377s)
	I1108 09:34:37.392148 1029992 addons.go:480] Verifying addon metrics-server=true in "addons-517137"
	I1108 09:34:37.394182 1029992 out.go:179] * Verifying ingress addon...
	I1108 09:34:37.396252 1029992 out.go:179] * Verifying registry addon...
	I1108 09:34:37.398169 1029992 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-517137 service yakd-dashboard -n yakd-dashboard
	
	I1108 09:34:37.399044 1029992 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1108 09:34:37.400816 1029992 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1108 09:34:37.405876 1029992 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:34:37.405899 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:37.407727 1029992 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1108 09:34:37.407748 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:37.588217 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:34:37.668545 1029992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.926910948s)
	I1108 09:34:37.668579 1029992 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-517137"
	I1108 09:34:37.671515 1029992 out.go:179] * Verifying csi-hostpath-driver addon...
	I1108 09:34:37.675092 1029992 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1108 09:34:37.689238 1029992 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:34:37.689270 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:37.749912 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:37.905505 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:37.905902 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:38.179980 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:38.405875 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:38.406390 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:38.678491 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:38.904350 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:38.904483 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:39.180935 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:39.189135 1029992 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1108 09:34:39.189220 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:39.205929 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:39.317994 1029992 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1108 09:34:39.331942 1029992 addons.go:239] Setting addon gcp-auth=true in "addons-517137"
	I1108 09:34:39.331999 1029992 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:34:39.332469 1029992 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:34:39.349639 1029992 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1108 09:34:39.349701 1029992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:34:39.367704 1029992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:34:39.411830 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:39.412177 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:39.470849 1029992 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1108 09:34:39.473370 1029992 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:34:39.475851 1029992 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1108 09:34:39.475873 1029992 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1108 09:34:39.488881 1029992 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1108 09:34:39.488903 1029992 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1108 09:34:39.501883 1029992 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:34:39.501911 1029992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1108 09:34:39.514803 1029992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:34:39.679422 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:39.750059 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:39.904952 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:39.906127 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:40.022187 1029992 addons.go:480] Verifying addon gcp-auth=true in "addons-517137"
	I1108 09:34:40.025621 1029992 out.go:179] * Verifying gcp-auth addon...
	I1108 09:34:40.030702 1029992 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1108 09:34:40.040969 1029992 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1108 09:34:40.041036 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:40.179454 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:40.408866 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:40.409921 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:40.534195 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:40.678000 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:40.902834 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:40.903381 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:41.034539 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:41.178364 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:41.402364 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:41.404743 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:41.534354 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:41.678350 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:41.902032 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:41.903616 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:42.042236 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:42.182465 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:42.250315 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:42.405300 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:42.406107 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:42.533772 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:42.678685 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:42.903398 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:42.904597 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:43.033930 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:43.177982 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:43.405228 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:43.405933 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:43.534273 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:43.678302 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:43.902760 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:43.905755 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:44.035066 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:44.178055 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:44.408074 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:44.408751 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:44.533927 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:44.678700 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:44.749465 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:44.903066 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:44.903724 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:45.036810 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:45.180805 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:45.407499 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:45.408321 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:45.534706 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:45.678761 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:45.902361 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:45.904563 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:46.035080 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:46.178126 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:46.403863 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:46.404022 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:46.533804 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:46.678447 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:46.902888 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:46.903426 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:47.034588 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:47.178332 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:47.249446 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:47.404617 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:47.404727 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:47.533882 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:47.678537 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:47.902058 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:47.903890 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:48.034433 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:48.178144 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:48.408168 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:48.409487 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:48.534246 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:48.678437 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:48.903292 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:48.904635 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:49.034393 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:49.178416 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:49.249635 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:49.406254 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:49.406384 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:49.534830 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:49.678572 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:49.902421 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:49.904123 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:50.034547 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:50.178726 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:50.403314 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:50.404375 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:50.534321 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:50.677994 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:50.903385 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:50.903830 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:51.033994 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:51.178999 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:51.249925 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:51.406331 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:51.406470 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:51.533802 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:51.678762 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:51.903684 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:51.904059 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:52.034603 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:52.178815 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:52.408274 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:52.408857 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:52.533736 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:52.678777 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:52.903071 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:52.903886 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:53.034115 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:53.177866 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:53.409367 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:53.410220 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:53.549364 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:53.678639 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:53.749740 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:53.903396 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:53.903525 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:54.034718 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:54.179221 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:54.402300 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:54.404273 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:54.534283 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:54.678011 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:54.903447 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:54.903720 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:55.033853 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:55.178716 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:55.409199 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:55.409502 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:55.535088 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:55.677859 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:55.903020 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:55.904291 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:56.034324 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:56.178583 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:56.249126 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:56.405392 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:56.405941 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:56.535180 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:56.678103 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:56.903692 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:56.903920 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:57.034037 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:57.179155 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:57.403475 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:57.404746 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:57.534271 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:57.677985 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:57.902921 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:57.903883 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:58.038183 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:58.179088 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:34:58.249834 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:34:58.408986 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:58.409237 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:58.534008 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:58.678741 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:58.903493 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:58.903943 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:59.034057 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:59.178790 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:59.403246 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:59.404339 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:34:59.534972 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:34:59.678830 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:34:59.902917 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:34:59.903862 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:00.040819 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:00.199027 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:00.409333 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:00.411642 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:00.535077 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:00.679113 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:00.749366 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:00.902719 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:00.903387 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:01.035001 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:01.179615 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:01.404290 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:01.405037 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:01.535060 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:01.680222 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:01.902319 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:01.904386 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:02.034614 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:02.178558 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:02.403789 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:02.405959 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:02.535972 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:02.678630 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:02.902080 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:02.905162 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:03.034389 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:03.178398 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:03.249316 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:03.408504 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:03.409373 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:03.534506 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:03.678668 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:03.902763 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:03.903235 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:04.034402 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:04.178311 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:04.404106 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:04.404323 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:04.534935 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:04.678835 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:04.903914 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:04.904031 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:05.034250 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:05.178074 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:05.249635 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:05.405670 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:05.406236 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:05.534386 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:05.678602 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:05.903099 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:05.903625 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:06.034101 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:06.178888 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:06.403712 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:06.404958 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:06.534519 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:06.678137 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:06.902951 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:06.903309 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:07.034395 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:07.178535 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:07.402159 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:07.409681 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:07.534667 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:07.678451 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:07.749453 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:07.902869 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:07.904055 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:08.034507 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:08.178585 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:08.405199 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:08.405642 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:08.534609 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:08.678393 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:08.902485 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:08.904633 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:09.034558 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:09.178517 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:09.408370 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:09.408394 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:09.534060 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:09.678039 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:09.749874 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:09.903253 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:09.903722 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:10.034612 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:10.178904 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:10.404780 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:10.405483 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:10.534505 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:10.678366 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:10.903316 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:10.904194 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:11.034164 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:11.178739 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:11.402269 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:11.408268 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:11.534260 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:11.678503 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:11.902426 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:11.903865 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:12.034613 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:12.178583 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:35:12.249439 1029992 node_ready.go:57] node "addons-517137" has "Ready":"False" status (will retry)
	I1108 09:35:12.404570 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:12.405168 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:12.534216 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:12.678068 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:12.931236 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:12.936645 1029992 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:35:12.936667 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:13.034620 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:13.207796 1029992 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:35:13.207880 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:13.289360 1029992 node_ready.go:49] node "addons-517137" is "Ready"
	I1108 09:35:13.289439 1029992 node_ready.go:38] duration metric: took 40.043247569s for node "addons-517137" to be "Ready" ...
	I1108 09:35:13.289467 1029992 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:35:13.289546 1029992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:35:13.331304 1029992 api_server.go:72] duration metric: took 42.113241495s to wait for apiserver process to appear ...
	I1108 09:35:13.331329 1029992 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:35:13.331349 1029992 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1108 09:35:13.361555 1029992 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1108 09:35:13.365617 1029992 api_server.go:141] control plane version: v1.34.1
	I1108 09:35:13.365732 1029992 api_server.go:131] duration metric: took 34.395139ms to wait for apiserver health ...
	I1108 09:35:13.365763 1029992 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:35:13.389518 1029992 system_pods.go:59] 19 kube-system pods found
	I1108 09:35:13.389554 1029992 system_pods.go:61] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Pending
	I1108 09:35:13.389564 1029992 system_pods.go:61] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:13.389570 1029992 system_pods.go:61] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending
	I1108 09:35:13.389604 1029992 system_pods.go:61] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending
	I1108 09:35:13.389618 1029992 system_pods.go:61] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:13.389623 1029992 system_pods.go:61] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:13.389627 1029992 system_pods.go:61] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:13.389632 1029992 system_pods.go:61] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:13.389645 1029992 system_pods.go:61] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:13.389650 1029992 system_pods.go:61] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:13.389655 1029992 system_pods.go:61] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:13.389688 1029992 system_pods.go:61] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:13.389701 1029992 system_pods.go:61] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending
	I1108 09:35:13.389711 1029992 system_pods.go:61] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:13.389721 1029992 system_pods.go:61] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending
	I1108 09:35:13.389726 1029992 system_pods.go:61] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending
	I1108 09:35:13.389731 1029992 system_pods.go:61] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending
	I1108 09:35:13.389738 1029992 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:13.389774 1029992 system_pods.go:61] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:35:13.389790 1029992 system_pods.go:74] duration metric: took 24.016879ms to wait for pod list to return data ...
	I1108 09:35:13.389805 1029992 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:35:13.396894 1029992 default_sa.go:45] found service account: "default"
	I1108 09:35:13.396931 1029992 default_sa.go:55] duration metric: took 7.110797ms for default service account to be created ...
	I1108 09:35:13.396942 1029992 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:35:13.412288 1029992 system_pods.go:86] 19 kube-system pods found
	I1108 09:35:13.412322 1029992 system_pods.go:89] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Pending
	I1108 09:35:13.412332 1029992 system_pods.go:89] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:13.412364 1029992 system_pods.go:89] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending
	I1108 09:35:13.412379 1029992 system_pods.go:89] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending
	I1108 09:35:13.412384 1029992 system_pods.go:89] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:13.412388 1029992 system_pods.go:89] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:13.412393 1029992 system_pods.go:89] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:13.412398 1029992 system_pods.go:89] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:13.412411 1029992 system_pods.go:89] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:13.412447 1029992 system_pods.go:89] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:13.412454 1029992 system_pods.go:89] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:13.412461 1029992 system_pods.go:89] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:13.412465 1029992 system_pods.go:89] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending
	I1108 09:35:13.412471 1029992 system_pods.go:89] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:13.412475 1029992 system_pods.go:89] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending
	I1108 09:35:13.412480 1029992 system_pods.go:89] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending
	I1108 09:35:13.412506 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending
	I1108 09:35:13.412522 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:13.412530 1029992 system_pods.go:89] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:35:13.412545 1029992 retry.go:31] will retry after 302.131416ms: missing components: kube-dns
	I1108 09:35:13.453968 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:13.454393 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:13.542205 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:13.686915 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:13.733391 1029992 system_pods.go:86] 19 kube-system pods found
	I1108 09:35:13.733439 1029992 system_pods.go:89] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:35:13.733448 1029992 system_pods.go:89] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:13.733486 1029992 system_pods.go:89] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:35:13.733501 1029992 system_pods.go:89] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:35:13.733506 1029992 system_pods.go:89] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:13.733512 1029992 system_pods.go:89] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:13.733524 1029992 system_pods.go:89] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:13.733529 1029992 system_pods.go:89] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:13.733552 1029992 system_pods.go:89] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:13.733566 1029992 system_pods.go:89] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:13.733572 1029992 system_pods.go:89] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:13.733591 1029992 system_pods.go:89] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:13.733602 1029992 system_pods.go:89] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending
	I1108 09:35:13.733611 1029992 system_pods.go:89] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:13.733635 1029992 system_pods.go:89] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:35:13.733649 1029992 system_pods.go:89] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:35:13.733657 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:13.733663 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:13.733675 1029992 system_pods.go:89] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:35:13.733693 1029992 retry.go:31] will retry after 356.856722ms: missing components: kube-dns
	I1108 09:35:13.919462 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:13.919589 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:14.036971 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:14.097040 1029992 system_pods.go:86] 19 kube-system pods found
	I1108 09:35:14.097142 1029992 system_pods.go:89] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:35:14.097169 1029992 system_pods.go:89] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:14.097201 1029992 system_pods.go:89] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:35:14.097229 1029992 system_pods.go:89] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:35:14.097257 1029992 system_pods.go:89] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:14.097296 1029992 system_pods.go:89] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:14.097320 1029992 system_pods.go:89] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:14.097345 1029992 system_pods.go:89] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:14.097380 1029992 system_pods.go:89] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:14.097404 1029992 system_pods.go:89] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:14.097430 1029992 system_pods.go:89] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:14.097466 1029992 system_pods.go:89] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:14.097494 1029992 system_pods.go:89] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:35:14.097528 1029992 system_pods.go:89] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:14.097561 1029992 system_pods.go:89] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:35:14.097593 1029992 system_pods.go:89] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:35:14.097633 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:14.097661 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:14.097687 1029992 system_pods.go:89] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Running
	I1108 09:35:14.097733 1029992 retry.go:31] will retry after 316.225073ms: missing components: kube-dns
	I1108 09:35:14.178799 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:14.404966 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:14.405378 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:14.422118 1029992 system_pods.go:86] 19 kube-system pods found
	I1108 09:35:14.422206 1029992 system_pods.go:89] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:35:14.422239 1029992 system_pods.go:89] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:14.422261 1029992 system_pods.go:89] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:35:14.422282 1029992 system_pods.go:89] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:35:14.422306 1029992 system_pods.go:89] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:14.422337 1029992 system_pods.go:89] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:14.422360 1029992 system_pods.go:89] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:14.422384 1029992 system_pods.go:89] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:14.422420 1029992 system_pods.go:89] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:14.422445 1029992 system_pods.go:89] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:14.422469 1029992 system_pods.go:89] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:14.422503 1029992 system_pods.go:89] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:14.422532 1029992 system_pods.go:89] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:35:14.422561 1029992 system_pods.go:89] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:14.422595 1029992 system_pods.go:89] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:35:14.422621 1029992 system_pods.go:89] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:35:14.422648 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:14.422688 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:14.422708 1029992 system_pods.go:89] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Running
	I1108 09:35:14.422738 1029992 retry.go:31] will retry after 596.782291ms: missing components: kube-dns
	I1108 09:35:14.534731 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:14.679612 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:14.917972 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:14.918366 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:15.049178 1029992 system_pods.go:86] 19 kube-system pods found
	I1108 09:35:15.049218 1029992 system_pods.go:89] "coredns-66bc5c9577-nljjg" [73885bcc-f793-4a9e-b9d4-3a74cfe6b1c2] Running
	I1108 09:35:15.049232 1029992 system_pods.go:89] "csi-hostpath-attacher-0" [a3665cac-688f-4f36-b3a0-1a0498071e87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:35:15.049264 1029992 system_pods.go:89] "csi-hostpath-resizer-0" [f40499d1-bd83-46df-b6b2-32d08920df2e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:35:15.049282 1029992 system_pods.go:89] "csi-hostpathplugin-dntzs" [43ae822c-04e7-4b65-8618-d67abfa4b472] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:35:15.049288 1029992 system_pods.go:89] "etcd-addons-517137" [73a6174e-c1e6-44ff-815d-4b5cb38ec663] Running
	I1108 09:35:15.049294 1029992 system_pods.go:89] "kindnet-c8b5h" [b38c23aa-0608-45ad-90c6-46799ff3b95a] Running
	I1108 09:35:15.049305 1029992 system_pods.go:89] "kube-apiserver-addons-517137" [f8b96b8b-0e30-448e-8f44-c6146d828684] Running
	I1108 09:35:15.049310 1029992 system_pods.go:89] "kube-controller-manager-addons-517137" [ab2d0b98-4566-47fe-b83e-75cf8ad7f9a7] Running
	I1108 09:35:15.049317 1029992 system_pods.go:89] "kube-ingress-dns-minikube" [c22c1475-077f-452e-b2e7-74809ca8f01b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:35:15.049326 1029992 system_pods.go:89] "kube-proxy-nb7h7" [b4096afc-dca3-41a9-bc2b-51aa81b43d90] Running
	I1108 09:35:15.049359 1029992 system_pods.go:89] "kube-scheduler-addons-517137" [3f35bf72-4453-4dce-bc21-df030a96811d] Running
	I1108 09:35:15.049374 1029992 system_pods.go:89] "metrics-server-85b7d694d7-pqhr4" [1ee63588-bcf7-4645-adae-3f2a433c05de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:35:15.049382 1029992 system_pods.go:89] "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:35:15.049392 1029992 system_pods.go:89] "registry-6b586f9694-hb7bs" [07bde6dd-79f9-4665-ae33-7d68ee454002] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:35:15.049404 1029992 system_pods.go:89] "registry-creds-764b6fb674-d4jk2" [15864f38-1975-41af-a124-d2add8a860bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:35:15.049415 1029992 system_pods.go:89] "registry-proxy-tgh4q" [d3e8e34a-6f29-474f-b733-ce54da95a473] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:35:15.049438 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txc5m" [f65ea898-7fcf-4933-a54c-38052b1afc12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:15.049453 1029992 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xvwnx" [6b076b32-96f2-4a1a-bccb-aed3abe9f4b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:35:15.049458 1029992 system_pods.go:89] "storage-provisioner" [ac61822e-0360-4ea0-9267-b8e9016e28b6] Running
	I1108 09:35:15.049482 1029992 system_pods.go:126] duration metric: took 1.652516878s to wait for k8s-apps to be running ...
	I1108 09:35:15.049496 1029992 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:35:15.049569 1029992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:35:15.051182 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:15.071395 1029992 system_svc.go:56] duration metric: took 21.88979ms WaitForService to wait for kubelet
	I1108 09:35:15.071425 1029992 kubeadm.go:587] duration metric: took 43.853367658s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:35:15.071443 1029992 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:35:15.083358 1029992 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 09:35:15.083394 1029992 node_conditions.go:123] node cpu capacity is 2
	I1108 09:35:15.083412 1029992 node_conditions.go:105] duration metric: took 11.960535ms to run NodePressure ...
	I1108 09:35:15.083451 1029992 start.go:242] waiting for startup goroutines ...
	I1108 09:35:15.179654 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:15.404809 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:15.405255 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:15.534432 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:15.679490 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:15.903339 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:15.904904 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:16.037316 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:16.179436 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:16.411488 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:16.411922 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:16.534337 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:16.685810 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:16.908611 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:16.909028 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:17.037398 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:17.179819 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:17.413020 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:17.413534 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:17.534930 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:17.678611 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:17.904965 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:17.905331 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:18.036141 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:18.178920 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:18.404657 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:18.409391 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:18.534826 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:18.679966 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:18.903907 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:18.904681 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:19.033898 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:19.178952 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:19.407790 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:19.410446 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:19.534740 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:19.679535 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:19.905672 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:19.906016 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:20.034120 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:20.179429 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:20.410320 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:20.410711 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:20.533868 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:20.679733 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:20.905362 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:20.905832 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:21.034058 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:21.178701 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:21.407759 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:21.408089 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:21.534506 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:21.679376 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:21.904938 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:21.905332 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:22.034994 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:22.178341 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:22.405050 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:22.405292 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:22.534961 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:22.678643 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:22.905112 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:22.905508 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:23.034049 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:23.178562 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:23.404417 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:23.404900 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:23.533990 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:23.678387 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:23.903231 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:23.903937 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:24.033784 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:24.181324 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:24.414312 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:24.419838 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:24.537005 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:24.680387 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:24.913228 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:24.913681 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:25.039697 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:25.182200 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:25.421053 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:25.421263 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:25.534818 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:25.679730 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:25.908082 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:25.908302 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:26.036690 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:26.181658 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:26.408512 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:26.408799 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:26.535935 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:26.678080 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:26.903703 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:26.905391 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:27.034866 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:27.180171 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:27.408024 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:27.408209 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:27.534294 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:27.679949 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:27.903054 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:27.904384 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:28.035672 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:28.185738 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:28.407179 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:28.407731 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:28.538181 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:28.680984 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:28.907183 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:28.907593 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:29.035578 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:29.186053 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:29.410668 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:29.411055 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:29.537137 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:29.679237 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:29.906024 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:29.906367 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:30.039083 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:30.179222 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:30.405922 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:30.408537 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:30.544846 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:30.699284 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:30.902645 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:30.904948 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:31.033842 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:31.182470 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:31.406487 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:31.406681 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:31.533772 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:31.679127 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:31.902602 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:31.905607 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:32.035592 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:32.178893 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:32.413801 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:32.417066 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:32.533988 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:32.678768 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:32.904688 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:32.906056 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:33.034861 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:33.180165 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:33.409210 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:33.411153 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:33.534487 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:33.680160 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:33.906411 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:33.906812 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:34.034909 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:34.180370 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:34.404300 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:34.404583 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:34.533943 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:34.679396 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:34.903486 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:34.903711 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:35.034760 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:35.178905 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:35.401959 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:35.404158 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:35.533956 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:35.678604 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:35.903074 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:35.904145 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:36.034886 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:36.179446 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:36.403268 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:36.413194 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:36.533766 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:36.679082 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:36.905362 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:36.905786 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:37.039240 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:37.179599 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:37.413727 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:37.413982 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:37.534116 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:37.678179 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:37.902624 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:37.904284 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:38.035007 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:38.179631 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:38.408821 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:38.409018 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:38.534673 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:38.679261 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:38.902948 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:38.905764 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:39.034534 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:39.179539 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:39.406411 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:39.407834 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:39.533670 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:39.679409 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:39.903924 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:39.905325 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:40.035173 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:40.178628 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:40.405371 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:40.405904 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:40.533929 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:40.679235 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:40.902329 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:40.904286 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:41.034961 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:41.179433 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:41.408178 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:41.408597 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:41.534269 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:41.678277 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:41.903763 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:41.904669 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:42.042354 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:42.201951 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:42.409343 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:42.409884 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:42.534314 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:42.679510 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:42.919366 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:42.919523 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:43.034740 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:43.179411 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:43.405365 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:43.405719 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:43.542090 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:43.681614 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:43.903880 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:43.905041 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:44.034701 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:44.179246 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:44.403102 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:44.412720 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:44.541275 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:44.681311 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:44.905185 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:44.905512 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:45.041314 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:45.179499 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:45.408875 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:45.409490 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:45.534537 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:45.679350 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:45.908067 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:45.910049 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:46.038278 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:46.178953 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:46.405496 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:46.406193 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:46.535066 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:46.679906 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:46.904065 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:46.905383 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:47.034539 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:47.178621 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:47.402545 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:47.404404 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:47.536426 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:47.678528 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:47.902666 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:47.904950 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:48.035959 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:48.179784 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:48.407280 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:48.407800 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:48.534117 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:48.678701 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:48.904083 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:48.905026 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:49.034179 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:49.178664 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:49.407266 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:49.407339 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:49.534205 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:49.678488 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:49.904071 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:49.905268 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:50.034552 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:50.179266 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:50.405061 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:50.405441 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:50.534284 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:50.678173 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:50.902954 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:50.904832 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:51.034200 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:51.178418 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:51.410287 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:51.412218 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:51.534534 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:51.679613 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:51.903750 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:51.903889 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:52.034609 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:52.179170 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:52.405961 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:52.406475 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:52.534928 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:52.679151 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:52.904211 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:52.905420 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:53.034302 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:53.178467 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:53.408606 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:53.409013 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:53.534891 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:53.678570 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:53.907147 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:35:53.907634 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:54.034688 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:54.179694 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:54.407297 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:54.407596 1029992 kapi.go:107] duration metric: took 1m17.006782407s to wait for kubernetes.io/minikube-addons=registry ...
	I1108 09:35:54.535387 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:54.679067 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:54.903755 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:55.035299 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:55.179352 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:55.406874 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:55.533783 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:55.679477 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:55.903113 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:56.034602 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:56.179762 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:56.403249 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:56.534125 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:56.679816 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:56.903442 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:57.034843 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:57.180097 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:57.411339 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:57.533967 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:57.679364 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:57.902735 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:58.033657 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:58.180213 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:58.402492 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:58.539895 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:58.679722 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:58.903794 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:59.034219 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:59.179295 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:59.410054 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:35:59.534510 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:35:59.678628 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:35:59.903413 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:00.097981 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:36:00.185999 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:00.441843 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:00.535879 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:36:00.679266 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:00.902567 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:01.035745 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:36:01.179271 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:01.413809 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:01.536296 1029992 kapi.go:107] duration metric: took 1m21.505595367s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1108 09:36:01.539606 1029992 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-517137 cluster.
	I1108 09:36:01.542547 1029992 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1108 09:36:01.545579 1029992 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
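The gcp-auth messages above describe a label-based opt-out for credential mounting. As a hedged illustration only (not part of the captured log), a pod configuration carrying that label might look like the sketch below; the pod name, container, and image are hypothetical, and the `"true"` value follows the commonly documented form of the `gcp-auth-skip-secret` label rather than anything stated in this log.

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds-demo          # hypothetical pod name for illustration
      labels:
        gcp-auth-skip-secret: "true"   # opts this pod out of GCP credential mounting (assumed value)
    spec:
      containers:
      - name: app                      # hypothetical container
        image: nginx

Per the log's own note, the label only affects pods created after the addon is enabled; existing pods would need to be recreated (or the addon re-enabled with --refresh) to pick up credentials.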
	I1108 09:36:01.679008 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:01.903555 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:02.180799 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:02.403877 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:02.678882 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:02.903477 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:03.179521 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:03.407939 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:03.679104 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:03.903875 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:04.179581 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:04.412787 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:04.680901 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:04.902656 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:05.179834 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:05.405485 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:05.678769 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:05.903889 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:06.179347 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:06.411007 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:06.678263 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:06.902501 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:07.183197 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:07.403389 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:07.679123 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:07.903127 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:08.178314 1029992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:36:08.402906 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:08.678567 1029992 kapi.go:107] duration metric: took 1m31.003477389s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1108 09:36:08.903187 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:09.402936 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:09.902909 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:10.403677 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:10.902196 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:11.402984 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:11.902791 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:12.402574 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:12.902360 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:13.407895 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:13.902452 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:14.408910 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:14.902577 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:15.409017 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:15.902593 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:16.412524 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:16.903387 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:17.409750 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:17.902982 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:18.403094 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:18.902399 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:19.408238 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:19.902904 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:20.403625 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:20.902742 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:21.406469 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:21.902255 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:22.408649 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:22.903561 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:23.409799 1029992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:36:23.902931 1029992 kapi.go:107] duration metric: took 1m46.503885258s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1108 09:36:23.906106 1029992 out.go:179] * Enabled addons: inspektor-gadget, amd-gpu-device-plugin, default-storageclass, cloud-spanner, storage-provisioner-rancher, nvidia-device-plugin, ingress-dns, registry-creds, storage-provisioner, metrics-server, yakd, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1108 09:36:23.908988 1029992 addons.go:515] duration metric: took 1m52.690424936s for enable addons: enabled=[inspektor-gadget amd-gpu-device-plugin default-storageclass cloud-spanner storage-provisioner-rancher nvidia-device-plugin ingress-dns registry-creds storage-provisioner metrics-server yakd volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1108 09:36:23.909044 1029992 start.go:247] waiting for cluster config update ...
	I1108 09:36:23.909072 1029992 start.go:256] writing updated cluster config ...
	I1108 09:36:23.909369 1029992 ssh_runner.go:195] Run: rm -f paused
	I1108 09:36:23.913969 1029992 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:36:23.918488 1029992 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nljjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.924284 1029992 pod_ready.go:94] pod "coredns-66bc5c9577-nljjg" is "Ready"
	I1108 09:36:23.924367 1029992 pod_ready.go:86] duration metric: took 5.838111ms for pod "coredns-66bc5c9577-nljjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.926929 1029992 pod_ready.go:83] waiting for pod "etcd-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.931312 1029992 pod_ready.go:94] pod "etcd-addons-517137" is "Ready"
	I1108 09:36:23.931340 1029992 pod_ready.go:86] duration metric: took 4.382839ms for pod "etcd-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.933676 1029992 pod_ready.go:83] waiting for pod "kube-apiserver-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.938286 1029992 pod_ready.go:94] pod "kube-apiserver-addons-517137" is "Ready"
	I1108 09:36:23.938320 1029992 pod_ready.go:86] duration metric: took 4.616926ms for pod "kube-apiserver-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:23.941666 1029992 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:24.318988 1029992 pod_ready.go:94] pod "kube-controller-manager-addons-517137" is "Ready"
	I1108 09:36:24.319018 1029992 pod_ready.go:86] duration metric: took 377.326332ms for pod "kube-controller-manager-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:24.519553 1029992 pod_ready.go:83] waiting for pod "kube-proxy-nb7h7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:24.918788 1029992 pod_ready.go:94] pod "kube-proxy-nb7h7" is "Ready"
	I1108 09:36:24.918820 1029992 pod_ready.go:86] duration metric: took 399.237305ms for pod "kube-proxy-nb7h7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:25.119494 1029992 pod_ready.go:83] waiting for pod "kube-scheduler-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:25.518332 1029992 pod_ready.go:94] pod "kube-scheduler-addons-517137" is "Ready"
	I1108 09:36:25.518362 1029992 pod_ready.go:86] duration metric: took 398.840336ms for pod "kube-scheduler-addons-517137" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:36:25.518374 1029992 pod_ready.go:40] duration metric: took 1.604372108s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:36:25.590934 1029992 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 09:36:25.598992 1029992 out.go:179] * Done! kubectl is now configured to use "addons-517137" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 09:36:25 addons-517137 crio[832]: time="2025-11-08T09:36:25.478354946Z" level=info msg="Stopped pod sandbox (already stopped): c85c00988993743e784f5d24544908b59b21804ce0739947ed85ff858c109e73" id=b96ed720-e229-449e-a27f-c04bc6013c0d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:36:25 addons-517137 crio[832]: time="2025-11-08T09:36:25.478799914Z" level=info msg="Removing pod sandbox: c85c00988993743e784f5d24544908b59b21804ce0739947ed85ff858c109e73" id=7ae655f3-5759-461e-952d-6f577916e3b9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:36:25 addons-517137 crio[832]: time="2025-11-08T09:36:25.483086928Z" level=info msg="Removed pod sandbox: c85c00988993743e784f5d24544908b59b21804ce0739947ed85ff858c109e73" id=7ae655f3-5759-461e-952d-6f577916e3b9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.634726683Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8c1bdb3a-6f1e-41db-aba6-98fad0118403 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.634814361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.641481479Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6d8359e0ff240c51ea3ff1256f83d32188ea74779af692d492cf0d0913c08f6f UID:6c3a29de-9cda-45ba-93b1-4af4480dc1a0 NetNS:/var/run/netns/60d61312-cda1-4e2e-9ac7-49a360eda3aa Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400214ac60}] Aliases:map[]}"
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.641517835Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.654950685Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6d8359e0ff240c51ea3ff1256f83d32188ea74779af692d492cf0d0913c08f6f UID:6c3a29de-9cda-45ba-93b1-4af4480dc1a0 NetNS:/var/run/netns/60d61312-cda1-4e2e-9ac7-49a360eda3aa Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400214ac60}] Aliases:map[]}"
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.655094452Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.657861901Z" level=info msg="Ran pod sandbox 6d8359e0ff240c51ea3ff1256f83d32188ea74779af692d492cf0d0913c08f6f with infra container: default/busybox/POD" id=8c1bdb3a-6f1e-41db-aba6-98fad0118403 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.662062501Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cad55d5c-1ccf-49dc-9316-f7b3c25f6a78 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.662183605Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cad55d5c-1ccf-49dc-9316-f7b3c25f6a78 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.662219535Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cad55d5c-1ccf-49dc-9316-f7b3c25f6a78 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.663052735Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5b137522-6a77-484d-96e9-fc3f9ac9ce74 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:36:26 addons-517137 crio[832]: time="2025-11-08T09:36:26.664343949Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 09:36:28 addons-517137 crio[832]: time="2025-11-08T09:36:28.672167738Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5b137522-6a77-484d-96e9-fc3f9ac9ce74 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:36:28 addons-517137 crio[832]: time="2025-11-08T09:36:28.673117152Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aaf43c2b-c631-46d2-bc19-8de849cc0432 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:36:28 addons-517137 crio[832]: time="2025-11-08T09:36:28.675824229Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dcee9ebc-543b-4702-ba33-35304c258c84 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:36:28 addons-517137 crio[832]: time="2025-11-08T09:36:28.682113881Z" level=info msg="Creating container: default/busybox/busybox" id=c9528c40-688f-4030-93e2-bd2ccb19603b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:36:28 addons-517137 crio[832]: time="2025-11-08T09:36:28.68225649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:36:28 addons-517137 crio[832]: time="2025-11-08T09:36:28.689492126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:36:28 addons-517137 crio[832]: time="2025-11-08T09:36:28.690128646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:36:28 addons-517137 crio[832]: time="2025-11-08T09:36:28.705952177Z" level=info msg="Created container 68343ec31a80dfb15b3985a41687de63e7578b231201ce14d8a50cee52e5544a: default/busybox/busybox" id=c9528c40-688f-4030-93e2-bd2ccb19603b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:36:28 addons-517137 crio[832]: time="2025-11-08T09:36:28.706724776Z" level=info msg="Starting container: 68343ec31a80dfb15b3985a41687de63e7578b231201ce14d8a50cee52e5544a" id=5bc1c94f-7916-4b83-9884-4114c4b8ce39 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:36:28 addons-517137 crio[832]: time="2025-11-08T09:36:28.709483577Z" level=info msg="Started container" PID=5074 containerID=68343ec31a80dfb15b3985a41687de63e7578b231201ce14d8a50cee52e5544a description=default/busybox/busybox id=5bc1c94f-7916-4b83-9884-4114c4b8ce39 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d8359e0ff240c51ea3ff1256f83d32188ea74779af692d492cf0d0913c08f6f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	68343ec31a80d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   6d8359e0ff240       busybox                                    default
	41b018b3e05c6       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             14 seconds ago       Running             controller                               0                   d30459d61db3b       ingress-nginx-controller-6c8bf45fb-s4bsx   ingress-nginx
	98a7b26a816a4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          29 seconds ago       Running             csi-snapshotter                          0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	0315b8bbbc12a       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          31 seconds ago       Running             csi-provisioner                          0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	1c9aa88510d22       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            33 seconds ago       Running             liveness-probe                           0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	56d6d74a9465d       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           33 seconds ago       Running             hostpath                                 0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	2363b11b1cf45       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                35 seconds ago       Running             node-driver-registrar                    0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	c46777785ca95       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 36 seconds ago       Running             gcp-auth                                 0                   8aa686a9fe0f0       gcp-auth-78565c9fb4-fzmkf                  gcp-auth
	8169f675b4caa       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            40 seconds ago       Running             gadget                                   0                   b6560cdaafce3       gadget-gsfbw                               gadget
	0c171eb6d4b83       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             40 seconds ago       Exited              patch                                    2                   9dcc77417a820       ingress-nginx-admission-patch-h9qsg        ingress-nginx
	257b4b111a203       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             40 seconds ago       Exited              patch                                    3                   6a0ddb2294cf9       gcp-auth-certs-patch-dpcml                 gcp-auth
	edcad2f498f99       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              43 seconds ago       Running             registry-proxy                           0                   4f900f843eea3       registry-proxy-tgh4q                       kube-system
	51409c66bfa0c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   48 seconds ago       Running             csi-external-health-monitor-controller   0                   85a6e01405531       csi-hostpathplugin-dntzs                   kube-system
	acdbfbb4a8daa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   49 seconds ago       Exited              create                                   0                   85b1e8408475f       ingress-nginx-admission-create-5btdn       ingress-nginx
	69eae070b3513       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              50 seconds ago       Running             yakd                                     0                   563d4dfb0fb7f       yakd-dashboard-5ff678cb9-vqp5m             yakd-dashboard
	b3af115d2fc9a       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              54 seconds ago       Running             csi-resizer                              0                   bb34ebfa04aa3       csi-hostpath-resizer-0                     kube-system
	fb847fc15d16f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      55 seconds ago       Running             volume-snapshot-controller               0                   1cbfe7df28d37       snapshot-controller-7d9fbc56b8-xvwnx       kube-system
	d8846ff2d41c0       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     56 seconds ago       Running             nvidia-device-plugin-ctr                 0                   324395d3ab9f7       nvidia-device-plugin-daemonset-z6l4p       kube-system
	ef4c40782ee32       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   d3186b78538da       snapshot-controller-7d9fbc56b8-txc5m       kube-system
	0018ff01c56c1       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   9be212f7efd55       cloud-spanner-emulator-6f9fcf858b-l8bpm    default
	84e32df6b9a42       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   9d4f83a5dee0d       csi-hostpath-attacher-0                    kube-system
	081c18a6ec169       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   81cf742c3048d       metrics-server-85b7d694d7-pqhr4            kube-system
	f75f3152c1878       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   84292713e4a7c       registry-6b586f9694-hb7bs                  kube-system
	0ea29a01eb6c6       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   5b0ff61fbd99e       kube-ingress-dns-minikube                  kube-system
	3da17f7633a86       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   4414166947f9a       local-path-provisioner-648f6765c9-rcxpf    local-path-storage
	8e4aed6aef0dd       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   8651534f8bc4d       coredns-66bc5c9577-nljjg                   kube-system
	b3bfe6e8c2cd3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   e9a7987dde477       storage-provisioner                        kube-system
	1922bbd45f9e7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   0d6d223921262       kindnet-c8b5h                              kube-system
	1834bdc4c64d5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   6b97de2f178b9       kube-proxy-nb7h7                           kube-system
	eadcc549cf850       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   e668fbfea36b6       kube-scheduler-addons-517137               kube-system
	d5a319b8c02a6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   29d14f70e17c2       kube-controller-manager-addons-517137      kube-system
	e56a129d33cb1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   910188bfef9e4       etcd-addons-517137                         kube-system
	544f403d8cbc6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   4f167994af5e1       kube-apiserver-addons-517137               kube-system
	
	
	==> coredns [8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9] <==
	[INFO] 10.244.0.10:41177 - 39907 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000055629s
	[INFO] 10.244.0.10:41177 - 11756 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001879013s
	[INFO] 10.244.0.10:41177 - 43952 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002132364s
	[INFO] 10.244.0.10:41177 - 39124 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000119111s
	[INFO] 10.244.0.10:41177 - 5893 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000097663s
	[INFO] 10.244.0.10:50210 - 54953 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00017762s
	[INFO] 10.244.0.10:50210 - 54715 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088556s
	[INFO] 10.244.0.10:59189 - 19341 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.003610159s
	[INFO] 10.244.0.10:59189 - 19079 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.003161465s
	[INFO] 10.244.0.10:60021 - 14666 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126159s
	[INFO] 10.244.0.10:60021 - 14494 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068199s
	[INFO] 10.244.0.10:42484 - 53235 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003327499s
	[INFO] 10.244.0.10:42484 - 52774 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003011734s
	[INFO] 10.244.0.10:48937 - 39799 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000174929s
	[INFO] 10.244.0.10:48937 - 39963 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125445s
	[INFO] 10.244.0.20:55880 - 37508 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000390767s
	[INFO] 10.244.0.20:48625 - 42555 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142371s
	[INFO] 10.244.0.20:42827 - 36970 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000276818s
	[INFO] 10.244.0.20:39475 - 31757 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000526796s
	[INFO] 10.244.0.20:43685 - 65433 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015641s
	[INFO] 10.244.0.20:37741 - 3457 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152644s
	[INFO] 10.244.0.20:47036 - 38403 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002407934s
	[INFO] 10.244.0.20:36378 - 32091 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003028579s
	[INFO] 10.244.0.20:50914 - 25361 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001122102s
	[INFO] 10.244.0.20:57285 - 37934 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002396685s
	
	
	==> describe nodes <==
	Name:               addons-517137
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-517137
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=addons-517137
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_34_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-517137
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-517137"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:34:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-517137
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:36:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:36:37 +0000   Sat, 08 Nov 2025 09:34:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:36:37 +0000   Sat, 08 Nov 2025 09:34:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:36:37 +0000   Sat, 08 Nov 2025 09:34:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:36:37 +0000   Sat, 08 Nov 2025 09:35:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-517137
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                1502dec3-de48-4684-9a57-a6d5a07f5971
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-6f9fcf858b-l8bpm     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  gadget                      gadget-gsfbw                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  gcp-auth                    gcp-auth-78565c9fb4-fzmkf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-s4bsx    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m
	  kube-system                 coredns-66bc5c9577-nljjg                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m6s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 csi-hostpathplugin-dntzs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 etcd-addons-517137                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m12s
	  kube-system                 kindnet-c8b5h                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m6s
	  kube-system                 kube-apiserver-addons-517137                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-addons-517137       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-nb7h7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-addons-517137                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 metrics-server-85b7d694d7-pqhr4             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m1s
	  kube-system                 nvidia-device-plugin-daemonset-z6l4p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 registry-6b586f9694-hb7bs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-creds-764b6fb674-d4jk2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 registry-proxy-tgh4q                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 snapshot-controller-7d9fbc56b8-txc5m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 snapshot-controller-7d9fbc56b8-xvwnx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  local-path-storage          local-path-provisioner-648f6765c9-rcxpf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vqp5m              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m5s                   kube-proxy       
	  Warning  CgroupV1                 2m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m19s (x9 over 2m19s)  kubelet          Node addons-517137 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node addons-517137 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s (x7 over 2m19s)  kubelet          Node addons-517137 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m12s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m12s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m12s                  kubelet          Node addons-517137 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m12s                  kubelet          Node addons-517137 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m12s                  kubelet          Node addons-517137 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m8s                   node-controller  Node addons-517137 event: Registered Node addons-517137 in Controller
	  Normal   NodeReady                85s                    kubelet          Node addons-517137 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 8 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:13] overlayfs: idmapped layers are currently not supported
	[ +27.402772] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:18] overlayfs: idmapped layers are currently not supported
	[  +7.306773] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:20] overlayfs: idmapped layers are currently not supported
	[ +10.554062] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:21] overlayfs: idmapped layers are currently not supported
	[ +13.395960] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:23] overlayfs: idmapped layers are currently not supported
	[ +14.098822] overlayfs: idmapped layers are currently not supported
	[ +16.951080] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:24] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:27] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:28] overlayfs: idmapped layers are currently not supported
	[ +11.539282] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:30] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:32] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 8 09:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e] <==
	{"level":"warn","ts":"2025-11-08T09:34:21.497284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.530554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.567002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.596674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.646080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.667874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.736600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.739722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.768615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.807055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.828191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.885720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.907684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.929754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.968067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:21.997819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:22.026251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:22.069067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:22.220512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:37.981037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:34:37.999158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:35:00.002156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:35:00.010006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:35:00.053606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:35:00.083729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51862","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [c46777785ca951ffc280809bd38c53e1ff0698ffbd62470b9fb12cda1e4e30a1] <==
	2025/11/08 09:36:01 GCP Auth Webhook started!
	2025/11/08 09:36:26 Ready to marshal response ...
	2025/11/08 09:36:26 Ready to write response ...
	2025/11/08 09:36:26 Ready to marshal response ...
	2025/11/08 09:36:26 Ready to write response ...
	2025/11/08 09:36:26 Ready to marshal response ...
	2025/11/08 09:36:26 Ready to write response ...
	
	
	==> kernel <==
	 09:36:38 up  8:19,  0 user,  load average: 3.40, 2.75, 2.75
	Linux addons-517137 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3] <==
	E1108 09:35:02.249602       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 09:35:02.249722       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 09:35:02.249805       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 09:35:02.249865       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1108 09:35:03.849508       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:35:03.849537       1 metrics.go:72] Registering metrics
	I1108 09:35:03.849733       1 controller.go:711] "Syncing nftables rules"
	I1108 09:35:12.248704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:35:12.248824       1 main.go:301] handling current node
	I1108 09:35:22.249598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:35:22.249688       1 main.go:301] handling current node
	I1108 09:35:32.248613       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:35:32.248676       1 main.go:301] handling current node
	I1108 09:35:42.249208       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:35:42.249240       1 main.go:301] handling current node
	I1108 09:35:52.249597       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:35:52.249629       1 main.go:301] handling current node
	I1108 09:36:02.249615       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:36:02.249676       1 main.go:301] handling current node
	I1108 09:36:12.249646       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:36:12.249685       1 main.go:301] handling current node
	I1108 09:36:22.252540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:36:22.252569       1 main.go:301] handling current node
	I1108 09:36:32.250205       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:36:32.250356       1 main.go:301] handling current node
	
	
	==> kube-apiserver [544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13] <==
	W1108 09:35:00.082954       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 09:35:12.891276       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.96.52:443: connect: connection refused
	E1108 09:35:12.891339       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.96.52:443: connect: connection refused" logger="UnhandledError"
	W1108 09:35:12.891857       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.96.52:443: connect: connection refused
	E1108 09:35:12.891892       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.96.52:443: connect: connection refused" logger="UnhandledError"
	W1108 09:35:12.970246       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.96.52:443: connect: connection refused
	E1108 09:35:12.971002       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.96.52:443: connect: connection refused" logger="UnhandledError"
	E1108 09:35:30.597242       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.211.77:443: connect: connection refused" logger="UnhandledError"
	W1108 09:35:30.597779       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 09:35:30.597891       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1108 09:35:30.598801       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.211.77:443: connect: connection refused" logger="UnhandledError"
	E1108 09:35:30.646895       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1: bad status from https://10.101.211.77:443/apis/metrics.k8s.io/v1beta1: 403" logger="UnhandledError"
	W1108 09:35:30.646909       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 09:35:30.647406       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1108 09:35:30.688266       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 09:35:30.701207       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1108 09:36:35.553882       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38472: use of closed network connection
	E1108 09:36:35.821666       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38506: use of closed network connection
	E1108 09:36:35.949988       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38532: use of closed network connection
	
	
	==> kube-controller-manager [d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf] <==
	I1108 09:34:30.029073       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:34:30.029105       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:34:30.029137       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:34:30.024109       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 09:34:30.024129       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:34:30.024140       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:34:30.024169       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:34:30.024186       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:34:30.024313       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:34:30.027799       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:34:30.027832       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:34:30.034729       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:34:30.038142       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:34:30.052068       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-517137" podCIDRs=["10.244.0.0/24"]
	E1108 09:34:36.360070       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1108 09:34:59.983931       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 09:34:59.984085       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1108 09:34:59.984139       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1108 09:35:00.020255       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1108 09:35:00.030149       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1108 09:35:00.088565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:35:00.239979       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:35:15.001758       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1108 09:35:30.096164       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 09:35:30.257510       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5] <==
	I1108 09:34:31.827307       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:34:32.050136       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:34:32.151131       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:34:32.151167       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 09:34:32.151265       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:34:32.314149       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:34:32.314200       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:34:32.321438       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:34:32.321873       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:34:32.321889       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:34:32.323462       1 config.go:200] "Starting service config controller"
	I1108 09:34:32.323472       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:34:32.323487       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:34:32.323492       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:34:32.323508       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:34:32.323512       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:34:32.328282       1 config.go:309] "Starting node config controller"
	I1108 09:34:32.328304       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:34:32.328313       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:34:32.424990       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:34:32.425063       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:34:32.425338       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7] <==
	E1108 09:34:23.145187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:34:23.145248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:34:23.145310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:34:23.145537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1108 09:34:23.148324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:34:23.148459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:34:23.148621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:34:23.148719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:34:23.148798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:34:23.153195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:34:23.153324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:34:23.153367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:34:23.153449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:34:23.153504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:34:23.153543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:34:23.995767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1108 09:34:24.056733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:34:24.085876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:34:24.112220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:34:24.124765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:34:24.203540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:34:24.215769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:34:24.225596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:34:24.294033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1108 09:34:25.923905       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:35:58 addons-517137 kubelet[1286]: I1108 09:35:58.319216    1286 scope.go:117] "RemoveContainer" containerID="65adf85f741000e296817edeefbc94493e3ec5d9f895438b119db1afbd45d10c"
	Nov 08 09:35:58 addons-517137 kubelet[1286]: I1108 09:35:58.390887    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-gsfbw" podStartSLOduration=65.721062882 podStartE2EDuration="1m22.390866076s" podCreationTimestamp="2025-11-08 09:34:36 +0000 UTC" firstStartedPulling="2025-11-08 09:35:40.942106745 +0000 UTC m=+75.601510983" lastFinishedPulling="2025-11-08 09:35:57.611909939 +0000 UTC m=+92.271314177" observedRunningTime="2025-11-08 09:35:58.375720075 +0000 UTC m=+93.035124313" watchObservedRunningTime="2025-11-08 09:35:58.390866076 +0000 UTC m=+93.050270314"
	Nov 08 09:35:58 addons-517137 kubelet[1286]: I1108 09:35:58.418920    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7wkk\" (UniqueName: \"kubernetes.io/projected/b55667ef-bb87-4015-935a-73d7bf0be4f9-kube-api-access-m7wkk\") pod \"b55667ef-bb87-4015-935a-73d7bf0be4f9\" (UID: \"b55667ef-bb87-4015-935a-73d7bf0be4f9\") "
	Nov 08 09:35:58 addons-517137 kubelet[1286]: I1108 09:35:58.421329    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55667ef-bb87-4015-935a-73d7bf0be4f9-kube-api-access-m7wkk" (OuterVolumeSpecName: "kube-api-access-m7wkk") pod "b55667ef-bb87-4015-935a-73d7bf0be4f9" (UID: "b55667ef-bb87-4015-935a-73d7bf0be4f9"). InnerVolumeSpecName "kube-api-access-m7wkk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 08 09:35:58 addons-517137 kubelet[1286]: I1108 09:35:58.519953    1286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m7wkk\" (UniqueName: \"kubernetes.io/projected/b55667ef-bb87-4015-935a-73d7bf0be4f9-kube-api-access-m7wkk\") on node \"addons-517137\" DevicePath \"\""
	Nov 08 09:35:59 addons-517137 kubelet[1286]: I1108 09:35:59.351592    1286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a0ddb2294cf90da42ccfb7b16956d09bd38b1e9837d7503469ab543c52b9f21"
	Nov 08 09:35:59 addons-517137 kubelet[1286]: I1108 09:35:59.942326    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76qpc\" (UniqueName: \"kubernetes.io/projected/4b2a615a-3398-47b5-89b2-3da77d4f73ec-kube-api-access-76qpc\") pod \"4b2a615a-3398-47b5-89b2-3da77d4f73ec\" (UID: \"4b2a615a-3398-47b5-89b2-3da77d4f73ec\") "
	Nov 08 09:35:59 addons-517137 kubelet[1286]: I1108 09:35:59.944748    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b2a615a-3398-47b5-89b2-3da77d4f73ec-kube-api-access-76qpc" (OuterVolumeSpecName: "kube-api-access-76qpc") pod "4b2a615a-3398-47b5-89b2-3da77d4f73ec" (UID: "4b2a615a-3398-47b5-89b2-3da77d4f73ec"). InnerVolumeSpecName "kube-api-access-76qpc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 08 09:36:00 addons-517137 kubelet[1286]: I1108 09:36:00.056907    1286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-76qpc\" (UniqueName: \"kubernetes.io/projected/4b2a615a-3398-47b5-89b2-3da77d4f73ec-kube-api-access-76qpc\") on node \"addons-517137\" DevicePath \"\""
	Nov 08 09:36:00 addons-517137 kubelet[1286]: I1108 09:36:00.397957    1286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dcc77417a820deee388871c898814e8a7527ed38e7c42e6dc238941f8d1a0a6"
	Nov 08 09:36:01 addons-517137 kubelet[1286]: I1108 09:36:01.429738    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-fzmkf" podStartSLOduration=68.685344764 podStartE2EDuration="1m22.429701562s" podCreationTimestamp="2025-11-08 09:34:39 +0000 UTC" firstStartedPulling="2025-11-08 09:35:47.501700058 +0000 UTC m=+82.161104288" lastFinishedPulling="2025-11-08 09:36:01.246056856 +0000 UTC m=+95.905461086" observedRunningTime="2025-11-08 09:36:01.427768618 +0000 UTC m=+96.087172864" watchObservedRunningTime="2025-11-08 09:36:01.429701562 +0000 UTC m=+96.089105792"
	Nov 08 09:36:04 addons-517137 kubelet[1286]: I1108 09:36:04.692208    1286 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 08 09:36:04 addons-517137 kubelet[1286]: I1108 09:36:04.692255    1286 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 08 09:36:14 addons-517137 kubelet[1286]: I1108 09:36:14.045860    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-dntzs" podStartSLOduration=8.248072787 podStartE2EDuration="1m2.045836933s" podCreationTimestamp="2025-11-08 09:35:12 +0000 UTC" firstStartedPulling="2025-11-08 09:35:13.933318855 +0000 UTC m=+48.592723085" lastFinishedPulling="2025-11-08 09:36:07.731083001 +0000 UTC m=+102.390487231" observedRunningTime="2025-11-08 09:36:08.507407667 +0000 UTC m=+103.166811914" watchObservedRunningTime="2025-11-08 09:36:14.045836933 +0000 UTC m=+108.705241171"
	Nov 08 09:36:15 addons-517137 kubelet[1286]: I1108 09:36:15.455209    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0cdf8f-c930-45a5-9600-862bdd319aaa" path="/var/lib/kubelet/pods/ae0cdf8f-c930-45a5-9600-862bdd319aaa/volumes"
	Nov 08 09:36:16 addons-517137 kubelet[1286]: E1108 09:36:16.810659    1286 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 08 09:36:16 addons-517137 kubelet[1286]: E1108 09:36:16.810748    1286 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15864f38-1975-41af-a124-d2add8a860bf-gcr-creds podName:15864f38-1975-41af-a124-d2add8a860bf nodeName:}" failed. No retries permitted until 2025-11-08 09:37:20.810731154 +0000 UTC m=+175.470135392 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/15864f38-1975-41af-a124-d2add8a860bf-gcr-creds") pod "registry-creds-764b6fb674-d4jk2" (UID: "15864f38-1975-41af-a124-d2add8a860bf") : secret "registry-creds-gcr" not found
	Nov 08 09:36:25 addons-517137 kubelet[1286]: I1108 09:36:25.462699    1286 scope.go:117] "RemoveContainer" containerID="eaabc7f2723fc7a4401b301e9c632ad1467c3860b61ed055058dc0cf8b0de7f3"
	Nov 08 09:36:25 addons-517137 kubelet[1286]: E1108 09:36:25.653199    1286 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/47d075c169d3cf9bb8779cd0a1c4e1f680eecf8951cbb4c5179f06f6c07c766d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/47d075c169d3cf9bb8779cd0a1c4e1f680eecf8951cbb4c5179f06f6c07c766d/diff: no such file or directory, extraDiskErr: <nil>
	Nov 08 09:36:25 addons-517137 kubelet[1286]: E1108 09:36:25.698271    1286 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5f38c220a9f4bd0eba1267ac2202cf7bbe243c219c2d0b8308e3008731c35121/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5f38c220a9f4bd0eba1267ac2202cf7bbe243c219c2d0b8308e3008731c35121/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-dpcml_b55667ef-bb87-4015-935a-73d7bf0be4f9/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-dpcml_b55667ef-bb87-4015-935a-73d7bf0be4f9/patch/1.log: no such file or directory
	Nov 08 09:36:26 addons-517137 kubelet[1286]: I1108 09:36:26.323677    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-s4bsx" podStartSLOduration=103.412269968 podStartE2EDuration="1m49.323656384s" podCreationTimestamp="2025-11-08 09:34:37 +0000 UTC" firstStartedPulling="2025-11-08 09:36:17.182547228 +0000 UTC m=+111.841951458" lastFinishedPulling="2025-11-08 09:36:23.093933644 +0000 UTC m=+117.753337874" observedRunningTime="2025-11-08 09:36:23.572063572 +0000 UTC m=+118.231467802" watchObservedRunningTime="2025-11-08 09:36:26.323656384 +0000 UTC m=+120.983060614"
	Nov 08 09:36:26 addons-517137 kubelet[1286]: I1108 09:36:26.397962    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6c3a29de-9cda-45ba-93b1-4af4480dc1a0-gcp-creds\") pod \"busybox\" (UID: \"6c3a29de-9cda-45ba-93b1-4af4480dc1a0\") " pod="default/busybox"
	Nov 08 09:36:26 addons-517137 kubelet[1286]: I1108 09:36:26.398211    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbbd\" (UniqueName: \"kubernetes.io/projected/6c3a29de-9cda-45ba-93b1-4af4480dc1a0-kube-api-access-sfbbd\") pod \"busybox\" (UID: \"6c3a29de-9cda-45ba-93b1-4af4480dc1a0\") " pod="default/busybox"
	Nov 08 09:36:29 addons-517137 kubelet[1286]: I1108 09:36:29.455515    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b55667ef-bb87-4015-935a-73d7bf0be4f9" path="/var/lib/kubelet/pods/b55667ef-bb87-4015-935a-73d7bf0be4f9/volumes"
	Nov 08 09:36:35 addons-517137 kubelet[1286]: I1108 09:36:35.585224    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=7.573710811 podStartE2EDuration="9.585207518s" podCreationTimestamp="2025-11-08 09:36:26 +0000 UTC" firstStartedPulling="2025-11-08 09:36:26.662449485 +0000 UTC m=+121.321853715" lastFinishedPulling="2025-11-08 09:36:28.673946192 +0000 UTC m=+123.333350422" observedRunningTime="2025-11-08 09:36:29.59044229 +0000 UTC m=+124.249846528" watchObservedRunningTime="2025-11-08 09:36:35.585207518 +0000 UTC m=+130.244611756"
	
	
	==> storage-provisioner [b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc] <==
	W1108 09:36:12.265480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:14.269001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:14.276061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:16.279331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:16.287114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:18.291489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:18.300649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:20.303651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:20.308908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:22.312986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:22.318934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:24.321809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:24.328810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:26.334360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:26.339610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:28.342968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:28.347728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:30.350865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:30.355850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:32.359274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:32.366003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:34.369326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:34.373750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:36.376401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:36:36.380917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-517137 -n addons-517137
helpers_test.go:269: (dbg) Run:  kubectl --context addons-517137 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-5btdn ingress-nginx-admission-patch-h9qsg registry-creds-764b6fb674-d4jk2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-517137 describe pod ingress-nginx-admission-create-5btdn ingress-nginx-admission-patch-h9qsg registry-creds-764b6fb674-d4jk2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-517137 describe pod ingress-nginx-admission-create-5btdn ingress-nginx-admission-patch-h9qsg registry-creds-764b6fb674-d4jk2: exit status 1 (79.652345ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5btdn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h9qsg" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-d4jk2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-517137 describe pod ingress-nginx-admission-create-5btdn ingress-nginx-admission-patch-h9qsg registry-creds-764b6fb674-d4jk2: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable headlamp --alsologtostderr -v=1: exit status 11 (264.72328ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:36:39.149265 1036667 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:36:39.150167 1036667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:36:39.150210 1036667 out.go:374] Setting ErrFile to fd 2...
	I1108 09:36:39.150230 1036667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:36:39.150539 1036667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:36:39.150860 1036667 mustload.go:66] Loading cluster: addons-517137
	I1108 09:36:39.151311 1036667 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:36:39.151354 1036667 addons.go:607] checking whether the cluster is paused
	I1108 09:36:39.151482 1036667 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:36:39.151516 1036667 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:36:39.152008 1036667 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:36:39.170042 1036667 ssh_runner.go:195] Run: systemctl --version
	I1108 09:36:39.170130 1036667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:36:39.187794 1036667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:36:39.295221 1036667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:36:39.295314 1036667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:36:39.327720 1036667 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:36:39.327744 1036667 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:36:39.327749 1036667 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:36:39.327770 1036667 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:36:39.327792 1036667 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:36:39.327802 1036667 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:36:39.327806 1036667 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:36:39.327810 1036667 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:36:39.327813 1036667 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:36:39.327820 1036667 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:36:39.327829 1036667 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:36:39.327833 1036667 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:36:39.327836 1036667 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:36:39.327840 1036667 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:36:39.327843 1036667 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:36:39.327848 1036667 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:36:39.327866 1036667 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:36:39.327877 1036667 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:36:39.327881 1036667 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:36:39.327885 1036667 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:36:39.327895 1036667 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:36:39.327899 1036667 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:36:39.327902 1036667 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:36:39.327905 1036667 cri.go:89] found id: ""
	I1108 09:36:39.327971 1036667 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:36:39.343522 1036667 out.go:203] 
	W1108 09:36:39.346562 1036667 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:36:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:36:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:36:39.346592 1036667 out.go:285] * 
	* 
	W1108 09:36:39.354817 1036667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:36:39.357843 1036667 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.15s)
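Every addons-disable failure in this report shows the same sequence in its stderr trace: minikube first checks whether the cluster is paused by inspecting the node container, opening an SSH session, listing kube-system containers with crictl, and finally running "sudo runc list -f json"; that last step fails on this node because /run/runc does not exist, so the command aborts with MK_ADDON_DISABLE_PAUSED before the addon is touched. The CloudSpanner, LocalPath and NvidiaDevicePlugin failures that follow end in the identical runc error, pointing at a single node-level cause rather than per-addon problems. The commands below are a minimal, hand-run sketch of that check against this profile, reconstructed from the trace above; only the final ls is an added assumption for confirming the missing state directory and is not part of the test suite.

	# Node container state, as queried in the trace above.
	docker container inspect addons-517137 --format={{.State.Status}}
	# CRI-level listing of kube-system containers (this step succeeds in the trace).
	out/minikube-linux-arm64 -p addons-517137 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# The failing step: runc cannot open its state directory.
	out/minikube-linux-arm64 -p addons-517137 ssh "sudo runc list -f json"
	# Assumed follow-up check: confirm that /run/runc is indeed absent on the node.
	out/minikube-linux-arm64 -p addons-517137 ssh "ls -ld /run/runc"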

                                                
                                    
TestAddons/parallel/CloudSpanner (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-l8bpm" [9734810f-a239-4c8f-8995-a13dc8940269] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003657531s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (285.538079ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:37:51.351806 1038548 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:37:51.353307 1038548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:51.353362 1038548 out.go:374] Setting ErrFile to fd 2...
	I1108 09:37:51.353383 1038548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:51.353694 1038548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:37:51.354484 1038548 mustload.go:66] Loading cluster: addons-517137
	I1108 09:37:51.355021 1038548 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:51.355070 1038548 addons.go:607] checking whether the cluster is paused
	I1108 09:37:51.355218 1038548 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:51.355259 1038548 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:37:51.355754 1038548 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:37:51.373316 1038548 ssh_runner.go:195] Run: systemctl --version
	I1108 09:37:51.373380 1038548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:37:51.392591 1038548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:37:51.507176 1038548 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:37:51.507313 1038548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:37:51.548076 1038548 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:37:51.548140 1038548 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:37:51.548167 1038548 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:37:51.548186 1038548 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:37:51.548218 1038548 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:37:51.548240 1038548 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:37:51.548262 1038548 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:37:51.548283 1038548 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:37:51.548317 1038548 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:37:51.548344 1038548 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:37:51.548369 1038548 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:37:51.548388 1038548 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:37:51.548406 1038548 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:37:51.548462 1038548 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:37:51.548482 1038548 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:37:51.548507 1038548 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:37:51.548546 1038548 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:37:51.548572 1038548 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:37:51.548593 1038548 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:37:51.548615 1038548 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:37:51.548657 1038548 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:37:51.548680 1038548 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:37:51.548699 1038548 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:37:51.548720 1038548 cri.go:89] found id: ""
	I1108 09:37:51.548802 1038548 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:37:51.564853 1038548 out.go:203] 
	W1108 09:37:51.567929 1038548 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:37:51.567966 1038548 out.go:285] * 
	* 
	W1108 09:37:51.576194 1038548 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:37:51.579339 1038548 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.30s)

                                                
                                    
TestAddons/parallel/LocalPath (8.49s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-517137 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-517137 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-517137 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [49af1fbc-3746-4ab4-846c-3a05f0c7efe9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [49af1fbc-3746-4ab4-846c-3a05f0c7efe9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [49af1fbc-3746-4ab4-846c-3a05f0c7efe9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003437351s
addons_test.go:967: (dbg) Run:  kubectl --context addons-517137 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 ssh "cat /opt/local-path-provisioner/pvc-975b142a-cf8a-4ec0-aa0b-29691c63b381_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-517137 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-517137 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (399.882269ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:37:44.933287 1038432 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:37:44.934686 1038432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:44.934703 1038432 out.go:374] Setting ErrFile to fd 2...
	I1108 09:37:44.934710 1038432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:44.935013 1038432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:37:44.935339 1038432 mustload.go:66] Loading cluster: addons-517137
	I1108 09:37:44.935712 1038432 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:44.935731 1038432 addons.go:607] checking whether the cluster is paused
	I1108 09:37:44.935876 1038432 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:44.935895 1038432 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:37:44.936529 1038432 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:37:44.956271 1038432 ssh_runner.go:195] Run: systemctl --version
	I1108 09:37:44.956335 1038432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:37:44.974683 1038432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:37:45.125969 1038432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:37:45.126080 1038432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:37:45.210530 1038432 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:37:45.210558 1038432 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:37:45.210564 1038432 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:37:45.210568 1038432 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:37:45.210571 1038432 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:37:45.210576 1038432 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:37:45.210579 1038432 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:37:45.210584 1038432 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:37:45.210587 1038432 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:37:45.210596 1038432 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:37:45.210600 1038432 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:37:45.210603 1038432 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:37:45.210607 1038432 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:37:45.210611 1038432 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:37:45.210614 1038432 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:37:45.210625 1038432 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:37:45.210633 1038432 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:37:45.210639 1038432 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:37:45.210642 1038432 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:37:45.210646 1038432 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:37:45.210651 1038432 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:37:45.210655 1038432 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:37:45.210659 1038432 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:37:45.210662 1038432 cri.go:89] found id: ""
	I1108 09:37:45.210721 1038432 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:37:45.248835 1038432 out.go:203] 
	W1108 09:37:45.259497 1038432 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:37:45.259536 1038432 out.go:285] * 
	* 
	W1108 09:37:45.272497 1038432 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:37:45.276109 1038432 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.49s)
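Note on the failure mode: before touching the addon, "addons disable" first checks whether the cluster is paused, and the stderr above shows that check shelling out to "sudo runc list -f json" over SSH (the cri.go "listing CRI containers in root : {State:paused ...}" path). On this crio node runc exits 1 with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED without ever reaching storage-provisioner-rancher. A minimal Go sketch that reproduces just the failing check on the node (hypothetical helper name; the actual check is the cri.go code path quoted in the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listPausedWithRunc mirrors the exact command the paused-check runs over SSH
	// ("Run: sudo runc list -f json" in the log above). On a node where /run/runc
	// does not exist, runc exits non-zero and the whole disable operation aborts
	// with MK_ADDON_DISABLE_PAUSED before any addon work happens.
	func listPausedWithRunc() ([]byte, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("runc list failed: %v\n%s", err, out)
		}
		return out, nil
	}

	func main() {
		if _, err := listPausedWithRunc(); err != nil {
			fmt.Println(err) // expected here: open /run/runc: no such file or directory
		}
	}

The NvidiaDevicePlugin and Yakd failures below hit exactly the same wall; only the addon name differs.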

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-z6l4p" [f30708d3-ce41-4098-91b2-ace24853a849] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003452391s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (271.317475ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:37:30.302686 1038069 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:37:30.303420 1038069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:30.303436 1038069 out.go:374] Setting ErrFile to fd 2...
	I1108 09:37:30.303443 1038069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:30.303714 1038069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:37:30.303992 1038069 mustload.go:66] Loading cluster: addons-517137
	I1108 09:37:30.304354 1038069 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:30.304371 1038069 addons.go:607] checking whether the cluster is paused
	I1108 09:37:30.304520 1038069 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:30.304537 1038069 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:37:30.304980 1038069 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:37:30.324745 1038069 ssh_runner.go:195] Run: systemctl --version
	I1108 09:37:30.324847 1038069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:37:30.343870 1038069 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:37:30.450911 1038069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:37:30.451270 1038069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:37:30.483975 1038069 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:37:30.484042 1038069 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:37:30.484062 1038069 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:37:30.484084 1038069 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:37:30.484120 1038069 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:37:30.484148 1038069 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:37:30.484171 1038069 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:37:30.484194 1038069 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:37:30.484230 1038069 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:37:30.484260 1038069 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:37:30.484281 1038069 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:37:30.484304 1038069 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:37:30.484336 1038069 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:37:30.484360 1038069 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:37:30.484382 1038069 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:37:30.484408 1038069 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:37:30.484474 1038069 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:37:30.484502 1038069 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:37:30.484522 1038069 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:37:30.484544 1038069 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:37:30.484562 1038069 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:37:30.484579 1038069 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:37:30.484584 1038069 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:37:30.484589 1038069 cri.go:89] found id: ""
	I1108 09:37:30.484657 1038069 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:37:30.499857 1038069 out.go:203] 
	W1108 09:37:30.502754 1038069 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:37:30.502779 1038069 out.go:285] * 
	* 
	W1108 09:37:30.511418 1038069 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:37:30.514576 1038069 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.28s)

                                                
                                    
TestAddons/parallel/Yakd (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-vqp5m" [0554ee7c-a455-43d0-acc9-b3945ab92880] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003227406s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-517137 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-517137 addons disable yakd --alsologtostderr -v=1: exit status 11 (266.923513ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:37:36.587883 1038142 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:37:36.589447 1038142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:36.589473 1038142 out.go:374] Setting ErrFile to fd 2...
	I1108 09:37:36.589480 1038142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:37:36.589774 1038142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:37:36.590122 1038142 mustload.go:66] Loading cluster: addons-517137
	I1108 09:37:36.590525 1038142 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:36.590545 1038142 addons.go:607] checking whether the cluster is paused
	I1108 09:37:36.590651 1038142 config.go:182] Loaded profile config "addons-517137": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:37:36.590667 1038142 host.go:66] Checking if "addons-517137" exists ...
	I1108 09:37:36.591138 1038142 cli_runner.go:164] Run: docker container inspect addons-517137 --format={{.State.Status}}
	I1108 09:37:36.608867 1038142 ssh_runner.go:195] Run: systemctl --version
	I1108 09:37:36.608931 1038142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-517137
	I1108 09:37:36.626219 1038142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/addons-517137/id_rsa Username:docker}
	I1108 09:37:36.730773 1038142 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:37:36.730856 1038142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:37:36.759395 1038142 cri.go:89] found id: "98a7b26a816a4608ff4be9cd241a3fd59d813b1e15775f748dc8d3d1b9e7b452"
	I1108 09:37:36.759420 1038142 cri.go:89] found id: "0315b8bbbc12adc384eae6e7618ff852717365b2422649a3d448ce0eac9f2b19"
	I1108 09:37:36.759426 1038142 cri.go:89] found id: "1c9aa88510d22ba5ae8b41116c883bd23e1dac87c834fd8174f77c83a78660d6"
	I1108 09:37:36.759430 1038142 cri.go:89] found id: "56d6d74a9465db238b4eb44e81815f6f653b3284c69c4a077e19b999e19a22e8"
	I1108 09:37:36.759433 1038142 cri.go:89] found id: "2363b11b1cf45312964a891229f29687d25af01165e0a77a7c96dc3222d69d67"
	I1108 09:37:36.759437 1038142 cri.go:89] found id: "edcad2f498f99f16873aab6bab5fec47d14ad3d053881312c3d06c87c7364d15"
	I1108 09:37:36.759440 1038142 cri.go:89] found id: "51409c66bfa0c983ec02fc4909934d84c9b55cf8680032444c068550e7f508fc"
	I1108 09:37:36.759443 1038142 cri.go:89] found id: "b3af115d2fc9a7dd6625728038f692fd8ed96b0be9e714f54808a0fce9c5a36e"
	I1108 09:37:36.759451 1038142 cri.go:89] found id: "fb847fc15d16ff72f6c5a7786965cf38b16fa4f860f9871c7d1c7a889e9d5c96"
	I1108 09:37:36.759462 1038142 cri.go:89] found id: "d8846ff2d41c03ebccab4a7b3342447166376bbb21108a68441cc7c3ac769bd1"
	I1108 09:37:36.759471 1038142 cri.go:89] found id: "ef4c40782ee32eb8b01a6da19c9e3a700f9fcf6908d3ccd2a61d11d4cd9dd93c"
	I1108 09:37:36.759474 1038142 cri.go:89] found id: "84e32df6b9a42331bae6a2471524bc39a81cf66dfb9e341943a0f5de80388170"
	I1108 09:37:36.759477 1038142 cri.go:89] found id: "081c18a6ec16976ee53a9b5661412d5488312f6329955bbbf2f4e9de8adc8bad"
	I1108 09:37:36.759481 1038142 cri.go:89] found id: "f75f3152c18780a50012470f95444199272e03106f9b79b7cc19efae7c925621"
	I1108 09:37:36.759484 1038142 cri.go:89] found id: "0ea29a01eb6c6040cf1757a1549cb9eeab15895c583844cf1378821e58a45dc9"
	I1108 09:37:36.759493 1038142 cri.go:89] found id: "8e4aed6aef0dd9722fc579b082d1c6320463ef66021ef0a3f1ccc8f245fec0f9"
	I1108 09:37:36.759505 1038142 cri.go:89] found id: "b3bfe6e8c2cd38ed2c163c76e41dcc111845eae8bac9967aac8cb607c54274fc"
	I1108 09:37:36.759510 1038142 cri.go:89] found id: "1922bbd45f9e7545da0676b3435d9d1325ab36f0e3d1c118ff7d940a9a2dbec3"
	I1108 09:37:36.759513 1038142 cri.go:89] found id: "1834bdc4c64d565bdb6c1fe05d22cb73e027fcbe9906680f46e58dfb0889e6a5"
	I1108 09:37:36.759516 1038142 cri.go:89] found id: "eadcc549cf850609455d235784802914341f69a99cbc9dbbcf39767bbb24dae7"
	I1108 09:37:36.759520 1038142 cri.go:89] found id: "d5a319b8c02a626dbf26ff33bccdee634b9f8e4bb018877b20795b5d9a736bcf"
	I1108 09:37:36.759524 1038142 cri.go:89] found id: "e56a129d33cb1da19543cb0d3d70da459e0d1f38c6c7e53b2d2e40001b01047e"
	I1108 09:37:36.759527 1038142 cri.go:89] found id: "544f403d8cbc622db23df6adc3209ec012399640f59ce584b9b4c63b093dbb13"
	I1108 09:37:36.759530 1038142 cri.go:89] found id: ""
	I1108 09:37:36.759586 1038142 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:37:36.774645 1038142 out.go:203] 
	W1108 09:37:36.777473 1038142 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:37:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:37:36.777500 1038142 out.go:285] * 
	* 
	W1108 09:37:36.785546 1038142 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:37:36.788482 1038142 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-517137 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)
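Every addon disabled in this run fails on the same runc check, which points at the node's runtime configuration rather than at any individual addon. A quick way to confirm which OCI runtime crio is actually configured with (and therefore which state directory should exist instead of /run/runc) is to query the CRI status. A hedged sketch, assuming crictl is on the node's PATH as the crictl ps calls in the logs above suggest:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// "crictl info" dumps the runtime status and configuration. If crio is
		// driving a runtime other than runc (crun, for example), runc's default
		// state directory /run/runc may simply not exist, which is exactly what
		// the failing "sudo runc list -f json" trips over.
		out, err := exec.Command("sudo", "crictl", "info").CombinedOutput()
		if err != nil {
			fmt.Printf("crictl info failed: %v\n", err)
		}
		fmt.Println(string(out))
	}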

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-386623 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-386623 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-xpfdf" [09a4afad-157d-4b1b-8315-1637069d83be] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386623 -n functional-386623
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-08 09:53:46.127617489 +0000 UTC m=+1225.977848558
functional_test.go:1645: (dbg) Run:  kubectl --context functional-386623 describe po hello-node-connect-7d85dfc575-xpfdf -n default
functional_test.go:1645: (dbg) kubectl --context functional-386623 describe po hello-node-connect-7d85dfc575-xpfdf -n default:
Name:             hello-node-connect-7d85dfc575-xpfdf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-386623/192.168.49.2
Start Time:       Sat, 08 Nov 2025 09:43:45 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qz5qp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qz5qp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xpfdf to functional-386623
Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m49s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-386623 logs hello-node-connect-7d85dfc575-xpfdf -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-386623 logs hello-node-connect-7d85dfc575-xpfdf -n default: exit status 1 (100.585521ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-xpfdf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-386623 logs hello-node-connect-7d85dfc575-xpfdf -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
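The root cause is in the pod events above: the deployment was created with the unqualified image name kicbase/echo-server (see the create deployment run at the top of this test), and crio's short-name mode is enforcing, so the bare name resolves ambiguously across the configured search registries and every pull attempt fails with ErrImagePull. A hedged fix sketch, written as a Go wrapper around kubectl in the same style the test uses to drive commands (docker.io/kicbase/echo-server:latest is an assumption about where the image is published):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Recreate the deployment with a fully qualified image reference so that
		// enforcing short-name resolution has nothing ambiguous to resolve.
		// The registry prefix below is an assumption; the failing test passed
		// only the bare short name "kicbase/echo-server".
		cmd := exec.Command("kubectl", "--context", "functional-386623",
			"create", "deployment", "hello-node-connect",
			"--image=docker.io/kicbase/echo-server:latest")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("create deployment failed: %v\n%s", err, out)
		}
	}

An alternative under the same assumption is a short-name alias for kicbase/echo-server in the node's containers registries configuration. Either way, the ImagePullBackOff above is what exhausts the 10m0s wait, and the empty Endpoints field on the hello-node-connect service further down is just a consequence of the pod never becoming Ready.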
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-386623 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-xpfdf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-386623/192.168.49.2
Start Time:       Sat, 08 Nov 2025 09:43:45 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qz5qp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qz5qp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xpfdf to functional-386623
Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m49s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-386623 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-386623 logs -l app=hello-node-connect: exit status 1 (99.248045ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-xpfdf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-386623 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-386623 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.223.199
IPs:                      10.98.223.199
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32525/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386623
helpers_test.go:243: (dbg) docker inspect functional-386623:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "322b821502df10bf61cb69ffd2d5bcc5d0582be3cf8d17bef43e18b919c20e7b",
	        "Created": "2025-11-08T09:40:45.762078142Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1044918,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:40:45.831734576Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/322b821502df10bf61cb69ffd2d5bcc5d0582be3cf8d17bef43e18b919c20e7b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/322b821502df10bf61cb69ffd2d5bcc5d0582be3cf8d17bef43e18b919c20e7b/hostname",
	        "HostsPath": "/var/lib/docker/containers/322b821502df10bf61cb69ffd2d5bcc5d0582be3cf8d17bef43e18b919c20e7b/hosts",
	        "LogPath": "/var/lib/docker/containers/322b821502df10bf61cb69ffd2d5bcc5d0582be3cf8d17bef43e18b919c20e7b/322b821502df10bf61cb69ffd2d5bcc5d0582be3cf8d17bef43e18b919c20e7b-json.log",
	        "Name": "/functional-386623",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-386623:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386623",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "322b821502df10bf61cb69ffd2d5bcc5d0582be3cf8d17bef43e18b919c20e7b",
	                "LowerDir": "/var/lib/docker/overlay2/6474a0ca780689dfb39dfdb20d02ec756963dd5eab3642b19372b64152ee4466-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6474a0ca780689dfb39dfdb20d02ec756963dd5eab3642b19372b64152ee4466/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6474a0ca780689dfb39dfdb20d02ec756963dd5eab3642b19372b64152ee4466/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6474a0ca780689dfb39dfdb20d02ec756963dd5eab3642b19372b64152ee4466/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386623",
	                "Source": "/var/lib/docker/volumes/functional-386623/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386623",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386623",
	                "name.minikube.sigs.k8s.io": "functional-386623",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "00c7e4548dc530048c1335831fddeab9ccfabc6e514592444619caf8028bbf07",
	            "SandboxKey": "/var/run/docker/netns/00c7e4548dc5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34235"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34236"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34239"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34237"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34238"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386623": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:a6:d1:6c:ef:1b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09a21f9bef3dd8901adc066194a04a90a4339fc9ff6923c6b4d6ac37a02d5d53",
	                    "EndpointID": "7436b76a314dda74daf63ec90f6fd3d2801f279cf31926dc21cd46fa2741b7a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386623",
	                        "322b821502df"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386623 -n functional-386623
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-386623 logs -n 25: (1.454123874s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-386623 ssh cat /etc/hostname                                                                                           │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ ssh     │ functional-386623 ssh -n functional-386623 sudo cat /tmp/does/not/exist/cp-test.txt                                               │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ ssh     │ functional-386623 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │                     │
	│ mount   │ -p functional-386623 /tmp/TestFunctionalparallelMountCmdany-port3657448542/001:/mount-9p --alsologtostderr -v=1                   │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │                     │
	│ ssh     │ functional-386623 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ ssh     │ functional-386623 ssh -- ls -la /mount-9p                                                                                         │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ ssh     │ functional-386623 ssh cat /mount-9p/test-1762595013058879670                                                                      │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ ssh     │ functional-386623 ssh stat /mount-9p/created-by-test                                                                              │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ ssh     │ functional-386623 ssh stat /mount-9p/created-by-pod                                                                               │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ ssh     │ functional-386623 ssh sudo umount -f /mount-9p                                                                                    │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ mount   │ -p functional-386623 /tmp/TestFunctionalparallelMountCmdspecific-port1273805518/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │                     │
	│ ssh     │ functional-386623 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │                     │
	│ ssh     │ functional-386623 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ ssh     │ functional-386623 ssh -- ls -la /mount-9p                                                                                         │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ ssh     │ functional-386623 ssh sudo umount -f /mount-9p                                                                                    │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │                     │
	│ mount   │ -p functional-386623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1378303404/001:/mount3 --alsologtostderr -v=1                │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │                     │
	│ mount   │ -p functional-386623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1378303404/001:/mount1 --alsologtostderr -v=1                │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │                     │
	│ mount   │ -p functional-386623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1378303404/001:/mount2 --alsologtostderr -v=1                │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │                     │
	│ ssh     │ functional-386623 ssh findmnt -T /mount1                                                                                          │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │                     │
	│ ssh     │ functional-386623 ssh findmnt -T /mount1                                                                                          │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ ssh     │ functional-386623 ssh findmnt -T /mount2                                                                                          │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ ssh     │ functional-386623 ssh findmnt -T /mount3                                                                                          │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ mount   │ -p functional-386623 --kill=true                                                                                                  │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │                     │
	│ addons  │ functional-386623 addons list                                                                                                     │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	│ addons  │ functional-386623 addons list -o json                                                                                             │ functional-386623 │ jenkins │ v1.37.0 │ 08 Nov 25 09:43 UTC │ 08 Nov 25 09:43 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:42:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:42:44.875007 1049131 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:42:44.875180 1049131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:42:44.875184 1049131 out.go:374] Setting ErrFile to fd 2...
	I1108 09:42:44.875187 1049131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:42:44.875458 1049131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:42:44.875823 1049131 out.go:368] Setting JSON to false
	I1108 09:42:44.876765 1049131 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30310,"bootTime":1762564655,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 09:42:44.876824 1049131 start.go:143] virtualization:  
	I1108 09:42:44.880228 1049131 out.go:179] * [functional-386623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 09:42:44.884178 1049131 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:42:44.884280 1049131 notify.go:221] Checking for updates...
	I1108 09:42:44.889903 1049131 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:42:44.892922 1049131 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 09:42:44.895915 1049131 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 09:42:44.899013 1049131 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 09:42:44.901875 1049131 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:42:44.905231 1049131 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:42:44.905333 1049131 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:42:44.933059 1049131 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:42:44.933168 1049131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:42:44.997849 1049131 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-08 09:42:44.987741975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:42:44.997949 1049131 docker.go:319] overlay module found
	I1108 09:42:45.001092 1049131 out.go:179] * Using the docker driver based on existing profile
	I1108 09:42:45.003899 1049131 start.go:309] selected driver: docker
	I1108 09:42:45.003909 1049131 start.go:930] validating driver "docker" against &{Name:functional-386623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-386623 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:42:45.003998 1049131 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:42:45.004125 1049131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:42:45.092731 1049131 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-08 09:42:45.076728005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:42:45.093224 1049131 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:42:45.093255 1049131 cni.go:84] Creating CNI manager for ""
	I1108 09:42:45.093312 1049131 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:42:45.093356 1049131 start.go:353] cluster config:
	{Name:functional-386623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-386623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:42:45.096846 1049131 out.go:179] * Starting "functional-386623" primary control-plane node in "functional-386623" cluster
	I1108 09:42:45.099798 1049131 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:42:45.102867 1049131 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:42:45.105902 1049131 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:42:45.106174 1049131 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:42:45.106210 1049131 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 09:42:45.106218 1049131 cache.go:59] Caching tarball of preloaded images
	I1108 09:42:45.106298 1049131 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 09:42:45.106315 1049131 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:42:45.106452 1049131 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/config.json ...
	I1108 09:42:45.149402 1049131 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:42:45.149415 1049131 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:42:45.149436 1049131 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:42:45.149461 1049131 start.go:360] acquireMachinesLock for functional-386623: {Name:mke6b7c8a1f52a85c7e6d7d2aa3507f1dc5dbb39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:42:45.149546 1049131 start.go:364] duration metric: took 61.57µs to acquireMachinesLock for "functional-386623"
	I1108 09:42:45.149568 1049131 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:42:45.149573 1049131 fix.go:54] fixHost starting: 
	I1108 09:42:45.149841 1049131 cli_runner.go:164] Run: docker container inspect functional-386623 --format={{.State.Status}}
	I1108 09:42:45.171268 1049131 fix.go:112] recreateIfNeeded on functional-386623: state=Running err=<nil>
	W1108 09:42:45.171298 1049131 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 09:42:45.174890 1049131 out.go:252] * Updating the running docker "functional-386623" container ...
	I1108 09:42:45.174946 1049131 machine.go:94] provisionDockerMachine start ...
	I1108 09:42:45.175047 1049131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:42:45.197632 1049131 main.go:143] libmachine: Using SSH client type: native
	I1108 09:42:45.197989 1049131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34235 <nil> <nil>}
	I1108 09:42:45.197997 1049131 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:42:45.397517 1049131 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386623
	
	I1108 09:42:45.397530 1049131 ubuntu.go:182] provisioning hostname "functional-386623"
	I1108 09:42:45.397594 1049131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:42:45.421613 1049131 main.go:143] libmachine: Using SSH client type: native
	I1108 09:42:45.421926 1049131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34235 <nil> <nil>}
	I1108 09:42:45.421936 1049131 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-386623 && echo "functional-386623" | sudo tee /etc/hostname
	I1108 09:42:45.586463 1049131 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386623
	
	I1108 09:42:45.586549 1049131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:42:45.607173 1049131 main.go:143] libmachine: Using SSH client type: native
	I1108 09:42:45.607472 1049131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34235 <nil> <nil>}
	I1108 09:42:45.607487 1049131 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-386623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386623/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-386623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:42:45.756915 1049131 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:42:45.756930 1049131 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 09:42:45.756960 1049131 ubuntu.go:190] setting up certificates
	I1108 09:42:45.756968 1049131 provision.go:84] configureAuth start
	I1108 09:42:45.757034 1049131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386623
	I1108 09:42:45.774737 1049131 provision.go:143] copyHostCerts
	I1108 09:42:45.774795 1049131 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 09:42:45.774802 1049131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 09:42:45.774885 1049131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 09:42:45.774991 1049131 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 09:42:45.774996 1049131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 09:42:45.775024 1049131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 09:42:45.775082 1049131 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 09:42:45.775085 1049131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 09:42:45.775108 1049131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 09:42:45.775163 1049131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.functional-386623 san=[127.0.0.1 192.168.49.2 functional-386623 localhost minikube]
	I1108 09:42:45.987479 1049131 provision.go:177] copyRemoteCerts
	I1108 09:42:45.987540 1049131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:42:45.987576 1049131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:42:46.005288 1049131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
	I1108 09:42:46.116428 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 09:42:46.134357 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:42:46.152599 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:42:46.170641 1049131 provision.go:87] duration metric: took 413.648365ms to configureAuth
	I1108 09:42:46.170657 1049131 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:42:46.170868 1049131 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:42:46.170981 1049131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:42:46.194189 1049131 main.go:143] libmachine: Using SSH client type: native
	I1108 09:42:46.194492 1049131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34235 <nil> <nil>}
	I1108 09:42:46.194505 1049131 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:42:51.562918 1049131 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:42:51.562930 1049131 machine.go:97] duration metric: took 6.387976536s to provisionDockerMachine
	I1108 09:42:51.562939 1049131 start.go:293] postStartSetup for "functional-386623" (driver="docker")
	I1108 09:42:51.562948 1049131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:42:51.563010 1049131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:42:51.563046 1049131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:42:51.583349 1049131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
	I1108 09:42:51.688320 1049131 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:42:51.691657 1049131 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:42:51.691676 1049131 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:42:51.691692 1049131 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 09:42:51.691747 1049131 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 09:42:51.691822 1049131 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 09:42:51.691897 1049131 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/test/nested/copy/1029234/hosts -> hosts in /etc/test/nested/copy/1029234
	I1108 09:42:51.691940 1049131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1029234
	I1108 09:42:51.699348 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 09:42:51.716743 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/test/nested/copy/1029234/hosts --> /etc/test/nested/copy/1029234/hosts (40 bytes)
	I1108 09:42:51.734138 1049131 start.go:296] duration metric: took 171.184638ms for postStartSetup
	I1108 09:42:51.734208 1049131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:42:51.734246 1049131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:42:51.751036 1049131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
	I1108 09:42:51.853447 1049131 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:42:51.858142 1049131 fix.go:56] duration metric: took 6.708550735s for fixHost
	I1108 09:42:51.858157 1049131 start.go:83] releasing machines lock for "functional-386623", held for 6.708603501s
	I1108 09:42:51.858228 1049131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386623
	I1108 09:42:51.874646 1049131 ssh_runner.go:195] Run: cat /version.json
	I1108 09:42:51.874673 1049131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:42:51.874687 1049131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:42:51.874731 1049131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:42:51.893555 1049131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
	I1108 09:42:51.895173 1049131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
	I1108 09:42:52.094218 1049131 ssh_runner.go:195] Run: systemctl --version
	I1108 09:42:52.100864 1049131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:42:52.138236 1049131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:42:52.142557 1049131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:42:52.142629 1049131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:42:52.150308 1049131 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:42:52.150322 1049131 start.go:496] detecting cgroup driver to use...
	I1108 09:42:52.150352 1049131 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 09:42:52.150396 1049131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:42:52.165791 1049131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:42:52.178836 1049131 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:42:52.178902 1049131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:42:52.194274 1049131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:42:52.207173 1049131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:42:52.334936 1049131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:42:52.463580 1049131 docker.go:234] disabling docker service ...
	I1108 09:42:52.463650 1049131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:42:52.478862 1049131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:42:52.491658 1049131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:42:52.626687 1049131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:42:52.755757 1049131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:42:52.768341 1049131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:42:52.783010 1049131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:42:52.783064 1049131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:42:52.793590 1049131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 09:42:52.793646 1049131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:42:52.802920 1049131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:42:52.811747 1049131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:42:52.820653 1049131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:42:52.828697 1049131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:42:52.837510 1049131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:42:52.845830 1049131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:42:52.854605 1049131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:42:52.862306 1049131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:42:52.869751 1049131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:42:52.995081 1049131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:42:53.217669 1049131 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:42:53.217737 1049131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:42:53.222152 1049131 start.go:564] Will wait 60s for crictl version
	I1108 09:42:53.222220 1049131 ssh_runner.go:195] Run: which crictl
	I1108 09:42:53.225812 1049131 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:42:53.253100 1049131 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:42:53.253184 1049131 ssh_runner.go:195] Run: crio --version
	I1108 09:42:53.285463 1049131 ssh_runner.go:195] Run: crio --version
	I1108 09:42:53.318362 1049131 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:42:53.321505 1049131 cli_runner.go:164] Run: docker network inspect functional-386623 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:42:53.337489 1049131 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1108 09:42:53.344665 1049131 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1108 09:42:53.347575 1049131 kubeadm.go:884] updating cluster {Name:functional-386623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-386623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:42:53.347722 1049131 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:42:53.347807 1049131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:42:53.381998 1049131 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:42:53.382010 1049131 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:42:53.382062 1049131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:42:53.411942 1049131 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:42:53.411954 1049131 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:42:53.411961 1049131 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1108 09:42:53.412068 1049131 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-386623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-386623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:42:53.412152 1049131 ssh_runner.go:195] Run: crio config
	I1108 09:42:53.488150 1049131 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1108 09:42:53.488170 1049131 cni.go:84] Creating CNI manager for ""
	I1108 09:42:53.488178 1049131 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:42:53.488186 1049131 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:42:53.488208 1049131 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386623 NodeName:functional-386623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:42:53.488330 1049131 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-386623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:42:53.488393 1049131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:42:53.499716 1049131 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:42:53.499785 1049131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:42:53.507376 1049131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 09:42:53.529401 1049131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:42:53.547209 1049131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1108 09:42:53.566639 1049131 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:42:53.571510 1049131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:42:53.721961 1049131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:42:53.735095 1049131 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623 for IP: 192.168.49.2
	I1108 09:42:53.735106 1049131 certs.go:195] generating shared ca certs ...
	I1108 09:42:53.735120 1049131 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:42:53.735260 1049131 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 09:42:53.735304 1049131 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 09:42:53.735310 1049131 certs.go:257] generating profile certs ...
	I1108 09:42:53.735392 1049131 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.key
	I1108 09:42:53.735435 1049131 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/apiserver.key.89489bb6
	I1108 09:42:53.735475 1049131 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/proxy-client.key
	I1108 09:42:53.735576 1049131 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 09:42:53.735602 1049131 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 09:42:53.735609 1049131 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:42:53.735633 1049131 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 09:42:53.735657 1049131 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:42:53.735678 1049131 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 09:42:53.735734 1049131 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 09:42:53.736367 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:42:53.754656 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:42:53.771518 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:42:53.789187 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 09:42:53.810803 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:42:53.828062 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 09:42:53.845373 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:42:53.862341 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:42:53.880246 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:42:53.898096 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 09:42:53.915987 1049131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 09:42:53.933822 1049131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:42:53.947247 1049131 ssh_runner.go:195] Run: openssl version
	I1108 09:42:53.953422 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:42:53.961836 1049131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:42:53.965402 1049131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:42:53.965458 1049131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:42:54.006275 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:42:54.016676 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 09:42:54.029131 1049131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 09:42:54.033512 1049131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 09:42:54.033582 1049131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 09:42:54.075850 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 09:42:54.084173 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 09:42:54.092555 1049131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 09:42:54.096254 1049131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 09:42:54.096309 1049131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 09:42:54.137394 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:42:54.145291 1049131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:42:54.148918 1049131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:42:54.189551 1049131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:42:54.230317 1049131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:42:54.271150 1049131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:42:54.311825 1049131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:42:54.352556 1049131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 09:42:54.393293 1049131 kubeadm.go:401] StartCluster: {Name:functional-386623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-386623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:42:54.393367 1049131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:42:54.393434 1049131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:42:54.421120 1049131 cri.go:89] found id: "c1e8a64d6f1a966d0a4d7328b1c53fc0570690bed8cf3caed7bff43129f79af2"
	I1108 09:42:54.421132 1049131 cri.go:89] found id: "037b65106a4895081d977c053a04aa48dd6e6e27039c3dfc2f5df8daf3790ab4"
	I1108 09:42:54.421136 1049131 cri.go:89] found id: "c90f0ef51eb4b339383ffdab32db53928e505f640ece880d46be44a2589f225d"
	I1108 09:42:54.421138 1049131 cri.go:89] found id: "8840c0489710491c0a40144fa64b082e82931899a99f87adc7620b7f93c3e698"
	I1108 09:42:54.421141 1049131 cri.go:89] found id: "323cc9f32934843bd586b69708c7ee2cbd4c10ec970e82439ec4852c9c44a7f1"
	I1108 09:42:54.421143 1049131 cri.go:89] found id: "a66b43b637ed3f7ef99ef875c680a7db12e4c9c1cad9415238c0bd014779c8a5"
	I1108 09:42:54.421147 1049131 cri.go:89] found id: "9e90a6b7799eb145686cf8b1ef970d60b545b30ac4fbd0add188da97e218fde4"
	I1108 09:42:54.421149 1049131 cri.go:89] found id: "2c6edc84f3c752bd3aaa8277e425e9f35bd0def99c7c613a0469c3129787d8c4"
	I1108 09:42:54.421152 1049131 cri.go:89] found id: ""
	I1108 09:42:54.421201 1049131 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:42:54.432856 1049131 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:42:54Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:42:54.432940 1049131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:42:54.440478 1049131 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:42:54.440487 1049131 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:42:54.440550 1049131 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:42:54.447853 1049131 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:42:54.448362 1049131 kubeconfig.go:125] found "functional-386623" server: "https://192.168.49.2:8441"
	I1108 09:42:54.449751 1049131 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:42:54.457276 1049131 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-08 09:40:57.052633278 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-08 09:42:53.561557053 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1108 09:42:54.457286 1049131 kubeadm.go:1161] stopping kube-system containers ...
	I1108 09:42:54.457299 1049131 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 09:42:54.457356 1049131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:42:54.486322 1049131 cri.go:89] found id: "c1e8a64d6f1a966d0a4d7328b1c53fc0570690bed8cf3caed7bff43129f79af2"
	I1108 09:42:54.486334 1049131 cri.go:89] found id: "037b65106a4895081d977c053a04aa48dd6e6e27039c3dfc2f5df8daf3790ab4"
	I1108 09:42:54.486337 1049131 cri.go:89] found id: "c90f0ef51eb4b339383ffdab32db53928e505f640ece880d46be44a2589f225d"
	I1108 09:42:54.486340 1049131 cri.go:89] found id: "8840c0489710491c0a40144fa64b082e82931899a99f87adc7620b7f93c3e698"
	I1108 09:42:54.486342 1049131 cri.go:89] found id: "323cc9f32934843bd586b69708c7ee2cbd4c10ec970e82439ec4852c9c44a7f1"
	I1108 09:42:54.486345 1049131 cri.go:89] found id: "a66b43b637ed3f7ef99ef875c680a7db12e4c9c1cad9415238c0bd014779c8a5"
	I1108 09:42:54.486348 1049131 cri.go:89] found id: "9e90a6b7799eb145686cf8b1ef970d60b545b30ac4fbd0add188da97e218fde4"
	I1108 09:42:54.486350 1049131 cri.go:89] found id: "2c6edc84f3c752bd3aaa8277e425e9f35bd0def99c7c613a0469c3129787d8c4"
	I1108 09:42:54.486352 1049131 cri.go:89] found id: ""
	I1108 09:42:54.486357 1049131 cri.go:252] Stopping containers: [c1e8a64d6f1a966d0a4d7328b1c53fc0570690bed8cf3caed7bff43129f79af2 037b65106a4895081d977c053a04aa48dd6e6e27039c3dfc2f5df8daf3790ab4 c90f0ef51eb4b339383ffdab32db53928e505f640ece880d46be44a2589f225d 8840c0489710491c0a40144fa64b082e82931899a99f87adc7620b7f93c3e698 323cc9f32934843bd586b69708c7ee2cbd4c10ec970e82439ec4852c9c44a7f1 a66b43b637ed3f7ef99ef875c680a7db12e4c9c1cad9415238c0bd014779c8a5 9e90a6b7799eb145686cf8b1ef970d60b545b30ac4fbd0add188da97e218fde4 2c6edc84f3c752bd3aaa8277e425e9f35bd0def99c7c613a0469c3129787d8c4]
	I1108 09:42:54.486409 1049131 ssh_runner.go:195] Run: which crictl
	I1108 09:42:54.490088 1049131 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 c1e8a64d6f1a966d0a4d7328b1c53fc0570690bed8cf3caed7bff43129f79af2 037b65106a4895081d977c053a04aa48dd6e6e27039c3dfc2f5df8daf3790ab4 c90f0ef51eb4b339383ffdab32db53928e505f640ece880d46be44a2589f225d 8840c0489710491c0a40144fa64b082e82931899a99f87adc7620b7f93c3e698 323cc9f32934843bd586b69708c7ee2cbd4c10ec970e82439ec4852c9c44a7f1 a66b43b637ed3f7ef99ef875c680a7db12e4c9c1cad9415238c0bd014779c8a5 9e90a6b7799eb145686cf8b1ef970d60b545b30ac4fbd0add188da97e218fde4 2c6edc84f3c752bd3aaa8277e425e9f35bd0def99c7c613a0469c3129787d8c4
	I1108 09:42:54.557059 1049131 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 09:42:54.671298 1049131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:42:54.679011 1049131 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov  8 09:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Nov  8 09:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov  8 09:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Nov  8 09:41 /etc/kubernetes/scheduler.conf
	
	I1108 09:42:54.679065 1049131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1108 09:42:54.686357 1049131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1108 09:42:54.693732 1049131 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:42:54.693786 1049131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:42:54.701177 1049131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1108 09:42:54.708552 1049131 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:42:54.708604 1049131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:42:54.715750 1049131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1108 09:42:54.723250 1049131 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:42:54.723303 1049131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:42:54.730596 1049131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:42:54.738027 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:42:54.786047 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:42:57.163871 1049131 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.37779726s)
	I1108 09:42:57.163930 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:42:57.389947 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:42:57.455411 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:42:57.531549 1049131 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:42:57.531619 1049131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:42:58.032412 1049131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:42:58.532638 1049131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:42:58.550089 1049131 api_server.go:72] duration metric: took 1.018555506s to wait for apiserver process to appear ...
	I1108 09:42:58.550102 1049131 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:42:58.550118 1049131 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1108 09:43:02.261261 1049131 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:43:02.261286 1049131 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:43:02.261298 1049131 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1108 09:43:02.364147 1049131 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:43:02.364165 1049131 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:43:02.550488 1049131 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1108 09:43:02.560998 1049131 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:43:02.561024 1049131 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:43:03.050235 1049131 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1108 09:43:03.064661 1049131 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:43:03.064691 1049131 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:43:03.550220 1049131 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1108 09:43:03.558984 1049131 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1108 09:43:03.577093 1049131 api_server.go:141] control plane version: v1.34.1
	I1108 09:43:03.577111 1049131 api_server.go:131] duration metric: took 5.027003516s to wait for apiserver health ...
	I1108 09:43:03.577119 1049131 cni.go:84] Creating CNI manager for ""
	I1108 09:43:03.577125 1049131 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:43:03.580692 1049131 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:43:03.583565 1049131 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:43:03.588097 1049131 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:43:03.588124 1049131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:43:03.604713 1049131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:43:04.087199 1049131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:43:04.090828 1049131 system_pods.go:59] 8 kube-system pods found
	I1108 09:43:04.090851 1049131 system_pods.go:61] "coredns-66bc5c9577-gxt7d" [4f67a522-03de-4928-bbd1-196a83617b9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:43:04.090860 1049131 system_pods.go:61] "etcd-functional-386623" [7e0ec7d1-0452-40e3-8a9a-08ed8a47b2a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:43:04.090864 1049131 system_pods.go:61] "kindnet-2jbfq" [162863ae-868a-4b16-b308-5c3e5883ee41] Running
	I1108 09:43:04.090871 1049131 system_pods.go:61] "kube-apiserver-functional-386623" [4f276028-9135-4f14-b7b4-5786acb29000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:43:04.090877 1049131 system_pods.go:61] "kube-controller-manager-functional-386623" [6a292b8c-5752-4fb4-b765-f69f55d93a12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:43:04.090881 1049131 system_pods.go:61] "kube-proxy-x4bzj" [f0f95597-fdee-495a-946d-11a67834ba14] Running
	I1108 09:43:04.090888 1049131 system_pods.go:61] "kube-scheduler-functional-386623" [61a80802-8749-4d2d-bbaf-1d1c9e9727cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:43:04.090891 1049131 system_pods.go:61] "storage-provisioner" [0e416398-1230-4cdd-b5c2-bd925a8d0ec6] Running
	I1108 09:43:04.090895 1049131 system_pods.go:74] duration metric: took 3.68691ms to wait for pod list to return data ...
	I1108 09:43:04.090919 1049131 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:43:04.093610 1049131 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 09:43:04.093627 1049131 node_conditions.go:123] node cpu capacity is 2
	I1108 09:43:04.093637 1049131 node_conditions.go:105] duration metric: took 2.714259ms to run NodePressure ...
	I1108 09:43:04.093694 1049131 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 09:43:04.346602 1049131 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1108 09:43:04.349996 1049131 kubeadm.go:744] kubelet initialised
	I1108 09:43:04.350007 1049131 kubeadm.go:745] duration metric: took 3.392018ms waiting for restarted kubelet to initialise ...
	I1108 09:43:04.350020 1049131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:43:04.359258 1049131 ops.go:34] apiserver oom_adj: -16
	I1108 09:43:04.359269 1049131 kubeadm.go:602] duration metric: took 9.918778051s to restartPrimaryControlPlane
	I1108 09:43:04.359277 1049131 kubeadm.go:403] duration metric: took 9.965995328s to StartCluster
	I1108 09:43:04.359292 1049131 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:43:04.359372 1049131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 09:43:04.359982 1049131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:43:04.360418 1049131 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:43:04.360477 1049131 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:43:04.360542 1049131 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:43:04.360848 1049131 addons.go:70] Setting storage-provisioner=true in profile "functional-386623"
	I1108 09:43:04.360862 1049131 addons.go:239] Setting addon storage-provisioner=true in "functional-386623"
	W1108 09:43:04.360868 1049131 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:43:04.360900 1049131 host.go:66] Checking if "functional-386623" exists ...
	I1108 09:43:04.361359 1049131 cli_runner.go:164] Run: docker container inspect functional-386623 --format={{.State.Status}}
	I1108 09:43:04.361506 1049131 addons.go:70] Setting default-storageclass=true in profile "functional-386623"
	I1108 09:43:04.361520 1049131 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-386623"
	I1108 09:43:04.361858 1049131 cli_runner.go:164] Run: docker container inspect functional-386623 --format={{.State.Status}}
	I1108 09:43:04.366306 1049131 out.go:179] * Verifying Kubernetes components...
	I1108 09:43:04.369475 1049131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:43:04.389265 1049131 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:43:04.400632 1049131 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:43:04.400644 1049131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:43:04.400713 1049131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:43:04.412732 1049131 addons.go:239] Setting addon default-storageclass=true in "functional-386623"
	W1108 09:43:04.412744 1049131 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:43:04.412770 1049131 host.go:66] Checking if "functional-386623" exists ...
	I1108 09:43:04.413275 1049131 cli_runner.go:164] Run: docker container inspect functional-386623 --format={{.State.Status}}
	I1108 09:43:04.440648 1049131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
	I1108 09:43:04.460979 1049131 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:43:04.460992 1049131 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:43:04.461058 1049131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:43:04.506260 1049131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
	I1108 09:43:04.639978 1049131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:43:04.719038 1049131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:43:04.737629 1049131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:43:05.498100 1049131 node_ready.go:35] waiting up to 6m0s for node "functional-386623" to be "Ready" ...
	I1108 09:43:05.502112 1049131 node_ready.go:49] node "functional-386623" is "Ready"
	I1108 09:43:05.502127 1049131 node_ready.go:38] duration metric: took 3.997179ms for node "functional-386623" to be "Ready" ...
	I1108 09:43:05.502138 1049131 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:43:05.502223 1049131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:43:05.508374 1049131 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:43:05.511198 1049131 addons.go:515] duration metric: took 1.150637602s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:43:05.516138 1049131 api_server.go:72] duration metric: took 1.155633492s to wait for apiserver process to appear ...
	I1108 09:43:05.516151 1049131 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:43:05.516168 1049131 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1108 09:43:05.525387 1049131 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1108 09:43:05.526401 1049131 api_server.go:141] control plane version: v1.34.1
	I1108 09:43:05.526416 1049131 api_server.go:131] duration metric: took 10.259496ms to wait for apiserver health ...
	I1108 09:43:05.526424 1049131 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:43:05.529153 1049131 system_pods.go:59] 8 kube-system pods found
	I1108 09:43:05.529172 1049131 system_pods.go:61] "coredns-66bc5c9577-gxt7d" [4f67a522-03de-4928-bbd1-196a83617b9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:43:05.529179 1049131 system_pods.go:61] "etcd-functional-386623" [7e0ec7d1-0452-40e3-8a9a-08ed8a47b2a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:43:05.529184 1049131 system_pods.go:61] "kindnet-2jbfq" [162863ae-868a-4b16-b308-5c3e5883ee41] Running
	I1108 09:43:05.529190 1049131 system_pods.go:61] "kube-apiserver-functional-386623" [4f276028-9135-4f14-b7b4-5786acb29000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:43:05.529197 1049131 system_pods.go:61] "kube-controller-manager-functional-386623" [6a292b8c-5752-4fb4-b765-f69f55d93a12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:43:05.529200 1049131 system_pods.go:61] "kube-proxy-x4bzj" [f0f95597-fdee-495a-946d-11a67834ba14] Running
	I1108 09:43:05.529206 1049131 system_pods.go:61] "kube-scheduler-functional-386623" [61a80802-8749-4d2d-bbaf-1d1c9e9727cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:43:05.529209 1049131 system_pods.go:61] "storage-provisioner" [0e416398-1230-4cdd-b5c2-bd925a8d0ec6] Running
	I1108 09:43:05.529215 1049131 system_pods.go:74] duration metric: took 2.786069ms to wait for pod list to return data ...
	I1108 09:43:05.529222 1049131 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:43:05.531258 1049131 default_sa.go:45] found service account: "default"
	I1108 09:43:05.531270 1049131 default_sa.go:55] duration metric: took 2.043835ms for default service account to be created ...
	I1108 09:43:05.531278 1049131 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:43:05.534010 1049131 system_pods.go:86] 8 kube-system pods found
	I1108 09:43:05.534028 1049131 system_pods.go:89] "coredns-66bc5c9577-gxt7d" [4f67a522-03de-4928-bbd1-196a83617b9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:43:05.534038 1049131 system_pods.go:89] "etcd-functional-386623" [7e0ec7d1-0452-40e3-8a9a-08ed8a47b2a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:43:05.534042 1049131 system_pods.go:89] "kindnet-2jbfq" [162863ae-868a-4b16-b308-5c3e5883ee41] Running
	I1108 09:43:05.534048 1049131 system_pods.go:89] "kube-apiserver-functional-386623" [4f276028-9135-4f14-b7b4-5786acb29000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:43:05.534060 1049131 system_pods.go:89] "kube-controller-manager-functional-386623" [6a292b8c-5752-4fb4-b765-f69f55d93a12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:43:05.534065 1049131 system_pods.go:89] "kube-proxy-x4bzj" [f0f95597-fdee-495a-946d-11a67834ba14] Running
	I1108 09:43:05.534071 1049131 system_pods.go:89] "kube-scheduler-functional-386623" [61a80802-8749-4d2d-bbaf-1d1c9e9727cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:43:05.534074 1049131 system_pods.go:89] "storage-provisioner" [0e416398-1230-4cdd-b5c2-bd925a8d0ec6] Running
	I1108 09:43:05.534080 1049131 system_pods.go:126] duration metric: took 2.797973ms to wait for k8s-apps to be running ...
	I1108 09:43:05.534086 1049131 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:43:05.534143 1049131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:43:05.548098 1049131 system_svc.go:56] duration metric: took 14.001814ms WaitForService to wait for kubelet
	I1108 09:43:05.548114 1049131 kubeadm.go:587] duration metric: took 1.187614739s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:43:05.548130 1049131 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:43:05.550681 1049131 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 09:43:05.550696 1049131 node_conditions.go:123] node cpu capacity is 2
	I1108 09:43:05.550707 1049131 node_conditions.go:105] duration metric: took 2.571846ms to run NodePressure ...
	I1108 09:43:05.550719 1049131 start.go:242] waiting for startup goroutines ...
	I1108 09:43:05.550725 1049131 start.go:247] waiting for cluster config update ...
	I1108 09:43:05.550735 1049131 start.go:256] writing updated cluster config ...
	I1108 09:43:05.551051 1049131 ssh_runner.go:195] Run: rm -f paused
	I1108 09:43:05.554651 1049131 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:43:05.558431 1049131 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gxt7d" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:43:07.564863 1049131 pod_ready.go:104] pod "coredns-66bc5c9577-gxt7d" is not "Ready", error: <nil>
	I1108 09:43:08.564761 1049131 pod_ready.go:94] pod "coredns-66bc5c9577-gxt7d" is "Ready"
	I1108 09:43:08.564776 1049131 pod_ready.go:86] duration metric: took 3.006331636s for pod "coredns-66bc5c9577-gxt7d" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:43:08.567777 1049131 pod_ready.go:83] waiting for pod "etcd-functional-386623" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:43:10.573960 1049131 pod_ready.go:94] pod "etcd-functional-386623" is "Ready"
	I1108 09:43:10.573974 1049131 pod_ready.go:86] duration metric: took 2.006184667s for pod "etcd-functional-386623" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:43:10.576061 1049131 pod_ready.go:83] waiting for pod "kube-apiserver-functional-386623" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:43:12.581672 1049131 pod_ready.go:104] pod "kube-apiserver-functional-386623" is not "Ready", error: <nil>
	I1108 09:43:13.081306 1049131 pod_ready.go:94] pod "kube-apiserver-functional-386623" is "Ready"
	I1108 09:43:13.081320 1049131 pod_ready.go:86] duration metric: took 2.505247199s for pod "kube-apiserver-functional-386623" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:43:13.084026 1049131 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-386623" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:43:14.091055 1049131 pod_ready.go:94] pod "kube-controller-manager-functional-386623" is "Ready"
	I1108 09:43:14.091070 1049131 pod_ready.go:86] duration metric: took 1.007031808s for pod "kube-controller-manager-functional-386623" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:43:14.093688 1049131 pod_ready.go:83] waiting for pod "kube-proxy-x4bzj" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:43:14.098937 1049131 pod_ready.go:94] pod "kube-proxy-x4bzj" is "Ready"
	I1108 09:43:14.098953 1049131 pod_ready.go:86] duration metric: took 5.251981ms for pod "kube-proxy-x4bzj" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:43:14.101874 1049131 pod_ready.go:83] waiting for pod "kube-scheduler-functional-386623" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:43:14.362647 1049131 pod_ready.go:94] pod "kube-scheduler-functional-386623" is "Ready"
	I1108 09:43:14.362661 1049131 pod_ready.go:86] duration metric: took 260.773664ms for pod "kube-scheduler-functional-386623" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:43:14.362672 1049131 pod_ready.go:40] duration metric: took 8.808000071s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:43:14.417257 1049131 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 09:43:14.422228 1049131 out.go:179] * Done! kubectl is now configured to use "functional-386623" cluster and "default" namespace by default
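	The 403 and 500 responses earlier in this restart log are the normal progression while kube-apiserver finishes its post-start hooks: anonymous probes of /healthz are forbidden, the verbose check then reports [-]poststarthook/rbac/bootstrap-roles and [-]poststarthook/scheduling/bootstrap-system-priority-classes until those hooks complete, and the endpoint finally returns 200. As a sketch only, assuming the functional-386623 kubeconfig context written by this run is still available, the same probe can be repeated with authenticated credentials instead of the anonymous check minikube uses:
	
	    kubectl --context functional-386623 get --raw '/healthz?verbose'
	    kubectl --context functional-386623 get --raw '/livez'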
	
	
	==> CRI-O <==
	Nov 08 09:43:57 functional-386623 crio[3575]: time="2025-11-08T09:43:57.543621423Z" level=info msg="Stopped pod sandbox (already stopped): f30b48bcfb97c4d37d6f6deb4ad4f3441335d4b5bd569e96f916bf5a66f0c7b1" id=71aa9eee-6b65-43c9-8fdb-d080a6b33768 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:43:57 functional-386623 crio[3575]: time="2025-11-08T09:43:57.544046823Z" level=info msg="Removing pod sandbox: f30b48bcfb97c4d37d6f6deb4ad4f3441335d4b5bd569e96f916bf5a66f0c7b1" id=adc30317-50c9-4710-8f4f-8b455e7e40e0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:43:57 functional-386623 crio[3575]: time="2025-11-08T09:43:57.547552364Z" level=info msg="Removed pod sandbox: f30b48bcfb97c4d37d6f6deb4ad4f3441335d4b5bd569e96f916bf5a66f0c7b1" id=adc30317-50c9-4710-8f4f-8b455e7e40e0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:43:57 functional-386623 crio[3575]: time="2025-11-08T09:43:57.548211538Z" level=info msg="Stopping pod sandbox: 18119a5018ced25a6f26fc9452bcda905243e4bd22310a7de9ccaad2c7dec7c9" id=4853dc8d-c63b-44ab-8f1c-3ac2678c075e name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:43:57 functional-386623 crio[3575]: time="2025-11-08T09:43:57.548259069Z" level=info msg="Stopped pod sandbox (already stopped): 18119a5018ced25a6f26fc9452bcda905243e4bd22310a7de9ccaad2c7dec7c9" id=4853dc8d-c63b-44ab-8f1c-3ac2678c075e name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:43:57 functional-386623 crio[3575]: time="2025-11-08T09:43:57.548692978Z" level=info msg="Removing pod sandbox: 18119a5018ced25a6f26fc9452bcda905243e4bd22310a7de9ccaad2c7dec7c9" id=789f29a7-49a0-41c7-8565-5aa58e875aa8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:43:57 functional-386623 crio[3575]: time="2025-11-08T09:43:57.552091527Z" level=info msg="Removed pod sandbox: 18119a5018ced25a6f26fc9452bcda905243e4bd22310a7de9ccaad2c7dec7c9" id=789f29a7-49a0-41c7-8565-5aa58e875aa8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:44:00 functional-386623 crio[3575]: time="2025-11-08T09:44:00.885993946Z" level=info msg="Running pod sandbox: default/hello-node-75c85bcc94-9ccsc/POD" id=05f7ecfa-2528-4b89-b44e-cb89272effbd name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:44:00 functional-386623 crio[3575]: time="2025-11-08T09:44:00.886064049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:44:00 functional-386623 crio[3575]: time="2025-11-08T09:44:00.891757414Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-9ccsc Namespace:default ID:995911361a77b7ce570ce84bdef7e76ede2a8ba4b31bcd0aca0f2610a2c9cabd UID:3a3dd699-8f1b-474a-9283-57d27296d879 NetNS:/var/run/netns/2b05762e-9342-4ffc-b431-17593d378ac5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cb48}] Aliases:map[]}"
	Nov 08 09:44:00 functional-386623 crio[3575]: time="2025-11-08T09:44:00.891938627Z" level=info msg="Adding pod default_hello-node-75c85bcc94-9ccsc to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:44:00 functional-386623 crio[3575]: time="2025-11-08T09:44:00.903478468Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-9ccsc Namespace:default ID:995911361a77b7ce570ce84bdef7e76ede2a8ba4b31bcd0aca0f2610a2c9cabd UID:3a3dd699-8f1b-474a-9283-57d27296d879 NetNS:/var/run/netns/2b05762e-9342-4ffc-b431-17593d378ac5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cb48}] Aliases:map[]}"
	Nov 08 09:44:00 functional-386623 crio[3575]: time="2025-11-08T09:44:00.903630538Z" level=info msg="Checking pod default_hello-node-75c85bcc94-9ccsc for CNI network kindnet (type=ptp)"
	Nov 08 09:44:00 functional-386623 crio[3575]: time="2025-11-08T09:44:00.906403979Z" level=info msg="Ran pod sandbox 995911361a77b7ce570ce84bdef7e76ede2a8ba4b31bcd0aca0f2610a2c9cabd with infra container: default/hello-node-75c85bcc94-9ccsc/POD" id=05f7ecfa-2528-4b89-b44e-cb89272effbd name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:44:00 functional-386623 crio[3575]: time="2025-11-08T09:44:00.90915781Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=146135dc-960d-49cf-bc47-f46f3bf48c71 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:44:01 functional-386623 crio[3575]: time="2025-11-08T09:44:01.58819587Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2173b02b-7172-460c-80fa-e2ae10b2ad8f name=/runtime.v1.ImageService/PullImage
	Nov 08 09:44:13 functional-386623 crio[3575]: time="2025-11-08T09:44:13.588129369Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b7971db5-3202-4471-a68b-13a10432e44c name=/runtime.v1.ImageService/PullImage
	Nov 08 09:44:27 functional-386623 crio[3575]: time="2025-11-08T09:44:27.587905568Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=89d64dac-8178-4d90-bfda-04a3de7f12ea name=/runtime.v1.ImageService/PullImage
	Nov 08 09:44:39 functional-386623 crio[3575]: time="2025-11-08T09:44:39.589224759Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=26c02179-c81c-4452-adef-4a086e08ff05 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:45:17 functional-386623 crio[3575]: time="2025-11-08T09:45:17.589414906Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=24fd8935-f135-46fe-af35-75c5fb8ec0d1 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:45:29 functional-386623 crio[3575]: time="2025-11-08T09:45:29.588148888Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a6c79779-692e-4b30-a8a5-a33be0546600 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:46:38 functional-386623 crio[3575]: time="2025-11-08T09:46:38.587599177Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5e6880dd-7da8-4300-85a2-0e69f5c2b6ca name=/runtime.v1.ImageService/PullImage
	Nov 08 09:46:57 functional-386623 crio[3575]: time="2025-11-08T09:46:57.588180826Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c5594998-ec6f-481b-8d87-78ba6187505d name=/runtime.v1.ImageService/PullImage
	Nov 08 09:49:24 functional-386623 crio[3575]: time="2025-11-08T09:49:24.5880015Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e6ff107c-b176-47ea-9f8f-99c7a831911c name=/runtime.v1.ImageService/PullImage
	Nov 08 09:49:47 functional-386623 crio[3575]: time="2025-11-08T09:49:47.588725412Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=744f4ce9-269b-48d8-b306-a62f5d06030d name=/runtime.v1.ImageService/PullImage
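	The CRI-O entries above show kicbase/echo-server:latest being re-requested every few minutes with no matching "Pulled image" record, which is consistent with the hello-node pod never leaving the image-pull phase. A minimal way to confirm this from the node itself, assuming the functional-386623 profile from this run is still up, would be:
	
	    minikube -p functional-386623 ssh -- sudo crictl images
	    minikube -p functional-386623 ssh -- sudo crictl pull kicbase/echo-server:latest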
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	268676bfbfcf0       docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33       9 minutes ago       Running             myfrontend                0                   bb292fc9e109c       sp-pod                                      default
	433a258acec38       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   10 minutes ago      Exited              mount-munger              0                   723bef03c4da1       busybox-mount                               default
	bd1fae75a274c       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90       10 minutes ago      Running             nginx                     0                   13caa5ed6bd11       nginx-svc                                   default
	bff6bbd4fa396       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      10 minutes ago      Running             storage-provisioner       3                   9fd93c099cec2       storage-provisioner                         kube-system
	2d93ec4a0261b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      10 minutes ago      Running             kindnet-cni               2                   30b1fa0697819       kindnet-2jbfq                               kube-system
	ad5ecec0f2691       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      10 minutes ago      Running             kube-proxy                2                   a5e21b92e657a       kube-proxy-x4bzj                            kube-system
	a76854b0c6bda       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      10 minutes ago      Running             coredns                   2                   e01d51f67c4c1       coredns-66bc5c9577-gxt7d                    kube-system
	e1bf582d80dd1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      10 minutes ago      Running             kube-apiserver            0                   68ee589e74cda       kube-apiserver-functional-386623            kube-system
	40b8e0c28ef8e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      10 minutes ago      Running             etcd                      2                   cd7427dedf3e6       etcd-functional-386623                      kube-system
	5c4deebd862bf       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      10 minutes ago      Running             kube-controller-manager   2                   eeea69a39f496       kube-controller-manager-functional-386623   kube-system
	f219cf1763561       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      10 minutes ago      Running             kube-scheduler            2                   a046fce8e2e58       kube-scheduler-functional-386623            kube-system
	c1e8a64d6f1a9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      11 minutes ago      Exited              storage-provisioner       2                   9fd93c099cec2       storage-provisioner                         kube-system
	037b65106a489       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      11 minutes ago      Exited              kube-scheduler            1                   a046fce8e2e58       kube-scheduler-functional-386623            kube-system
	c90f0ef51eb4b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      11 minutes ago      Exited              kube-controller-manager   1                   eeea69a39f496       kube-controller-manager-functional-386623   kube-system
	8840c04897104       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      11 minutes ago      Exited              kube-proxy                1                   a5e21b92e657a       kube-proxy-x4bzj                            kube-system
	323cc9f329348       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      11 minutes ago      Exited              coredns                   1                   e01d51f67c4c1       coredns-66bc5c9577-gxt7d                    kube-system
	a66b43b637ed3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      11 minutes ago      Exited              kindnet-cni               1                   30b1fa0697819       kindnet-2jbfq                               kube-system
	2c6edc84f3c75       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      11 minutes ago      Exited              etcd                      1                   cd7427dedf3e6       etcd-functional-386623                      kube-system
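	The container status table is the runtime's view rather than the API server's; the Exited rows are the pre-restart attempt 1 containers that were superseded by the attempt 2 instances above them. As a sketch, the same listing can be regenerated directly against CRI-O on the node:
	
	    minikube -p functional-386623 ssh -- sudo crictl ps -a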
	
	
	==> coredns [323cc9f32934843bd586b69708c7ee2cbd4c10ec970e82439ec4852c9c44a7f1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48974 - 11020 "HINFO IN 8904710685247059692.2923008074401556851. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.053664596s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
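	The "clusterrole ... not found" errors in this first CoreDNS instance line up with the [-]poststarthook/rbac/bootstrap-roles failure in the earlier apiserver healthz output; they stop once kubeadm's addon phase restores the bootstrap RBAC objects, as the second CoreDNS instance below shows. A quick follow-up check, assuming the same kubeconfig context, might be:
	
	    kubectl --context functional-386623 get clusterrole system:coredns system:discovery
	    kubectl --context functional-386623 -n kube-system logs -l k8s-app=kube-dns --tail=20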
	
	
	==> coredns [a76854b0c6bda8582c7e30f04c8675cac87f9749b01792f99310529f8a3f6d91] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58762 - 34218 "HINFO IN 1396748533894792955.7940964728207128259. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033345469s
	
	
	==> describe nodes <==
	Name:               functional-386623
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-386623
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=functional-386623
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_41_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:41:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-386623
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:53:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:52:26 +0000   Sat, 08 Nov 2025 09:41:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:52:26 +0000   Sat, 08 Nov 2025 09:41:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:52:26 +0000   Sat, 08 Nov 2025 09:41:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:52:26 +0000   Sat, 08 Nov 2025 09:42:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-386623
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c374a0fe-e9f7-4df7-9246-75f29581bbcf
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-9ccsc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  default                     hello-node-connect-7d85dfc575-xpfdf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-gxt7d                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-386623                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-2jbfq                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-386623             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-386623    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-x4bzj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-386623             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-386623 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-386623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-386623 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-386623 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-386623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-386623 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-386623 event: Registered Node functional-386623 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-386623 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-386623 event: Registered Node functional-386623 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-386623 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-386623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-386623 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-386623 event: Registered Node functional-386623 in Controller
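	The node description above records a healthy control plane (Ready has been True since 09:42:02) with three kubelet restarts visible in the event history. To regenerate it, or to pull out just the Ready condition, something like the following works against the same context (sketch only):
	
	    kubectl --context functional-386623 describe node functional-386623
	    kubectl --context functional-386623 get node functional-386623 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'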
	
	
	==> dmesg <==
	[ +27.402772] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:14] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:15] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:16] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:18] overlayfs: idmapped layers are currently not supported
	[  +7.306773] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:20] overlayfs: idmapped layers are currently not supported
	[ +10.554062] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:21] overlayfs: idmapped layers are currently not supported
	[ +13.395960] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:23] overlayfs: idmapped layers are currently not supported
	[ +14.098822] overlayfs: idmapped layers are currently not supported
	[ +16.951080] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:24] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:27] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:28] overlayfs: idmapped layers are currently not supported
	[ +11.539282] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:30] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:32] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 8 09:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:40] overlayfs: idmapped layers are currently not supported
	[Nov 8 09:41] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2c6edc84f3c752bd3aaa8277e425e9f35bd0def99c7c613a0469c3129787d8c4] <==
	{"level":"warn","ts":"2025-11-08T09:42:20.086915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:42:20.096933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:42:20.113346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:42:20.144925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:42:20.159367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:42:20.174539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:42:20.241503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49790","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:42:46.370102Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-08T09:42:46.370152Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-386623","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-08T09:42:46.370245Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T09:42:46.654002Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T09:42:46.655411Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:42:46.655459Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-08T09:42:46.655525Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-08T09:42:46.655543Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-08T09:42:46.655584Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T09:42:46.655662Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T09:42:46.655712Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-08T09:42:46.655796Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T09:42:46.655815Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T09:42:46.655825Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:42:46.659483Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-08T09:42:46.659561Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:42:46.659591Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-08T09:42:46.659598Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-386623","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [40b8e0c28ef8e8001108910e89cce29706c4bd208055e5447642396a1e634f19] <==
	{"level":"warn","ts":"2025-11-08T09:43:00.703171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:00.732569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:00.750706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:00.780416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:00.810133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:00.835673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:00.882503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:00.912680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:00.933106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:00.965076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:00.994689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:01.049172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:01.084988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:01.105283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:01.127048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:01.159012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:01.174853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:01.189713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:01.247832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:01.268659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:01.284554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:43:01.374557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41268","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:52:59.430249Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1136}
	{"level":"info","ts":"2025-11-08T09:52:59.454122Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1136,"took":"23.499379ms","hash":1303830917,"current-db-size-bytes":3211264,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1441792,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-08T09:52:59.454169Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1303830917,"revision":1136,"compact-revision":-1}
	
	
	==> kernel <==
	 09:53:47 up  8:36,  0 user,  load average: 0.10, 0.43, 1.26
	Linux functional-386623 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2d93ec4a0261bbdbcce2397b63acda33fe385e505332cd8c00bfc4693bec8fdb] <==
	I1108 09:51:43.250654       1 main.go:301] handling current node
	I1108 09:51:53.251058       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:51:53.251097       1 main.go:301] handling current node
	I1108 09:52:03.252183       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:52:03.252295       1 main.go:301] handling current node
	I1108 09:52:13.255690       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:52:13.255722       1 main.go:301] handling current node
	I1108 09:52:23.253141       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:52:23.253172       1 main.go:301] handling current node
	I1108 09:52:33.251676       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:52:33.251710       1 main.go:301] handling current node
	I1108 09:52:43.248893       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:52:43.248926       1 main.go:301] handling current node
	I1108 09:52:53.252590       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:52:53.252624       1 main.go:301] handling current node
	I1108 09:53:03.248842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:53:03.248953       1 main.go:301] handling current node
	I1108 09:53:13.255609       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:53:13.255642       1 main.go:301] handling current node
	I1108 09:53:23.255402       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:53:23.255436       1 main.go:301] handling current node
	I1108 09:53:33.248890       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:53:33.248923       1 main.go:301] handling current node
	I1108 09:53:43.248995       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:53:43.249029       1 main.go:301] handling current node
	
	
	==> kindnet [a66b43b637ed3f7ef99ef875c680a7db12e4c9c1cad9415238c0bd014779c8a5] <==
	I1108 09:42:17.483738       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:42:17.483938       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1108 09:42:17.484062       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:42:17.484074       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:42:17.484083       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:42:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:42:17.634665       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:42:17.712502       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:42:17.712610       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:42:17.712799       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 09:42:21.230065       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 09:42:22.313462       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:42:22.313602       1 metrics.go:72] Registering metrics
	I1108 09:42:22.313689       1 controller.go:711] "Syncing nftables rules"
	I1108 09:42:27.638345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:42:27.638403       1 main.go:301] handling current node
	I1108 09:42:37.634577       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:42:37.634610       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e1bf582d80dd101b956ad824b4d05f6ccf22ba9c5a5e1417ffa1f685c2f76474] <==
	I1108 09:43:02.484551       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:43:02.488849       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:43:02.489542       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:43:02.490006       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:43:02.490056       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 09:43:02.492329       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:43:02.495057       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:43:02.508149       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:43:02.509969       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:43:02.584550       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:43:03.115825       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:43:04.080071       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:43:04.197100       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:43:04.264952       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:43:04.272916       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:43:05.806538       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:43:06.053859       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:43:06.151853       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:43:17.753156       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.115.180"}
	I1108 09:43:23.120603       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.250.128"}
	I1108 09:43:45.772009       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.223.199"}
	E1108 09:43:53.025432       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49780: use of closed network connection
	E1108 09:43:53.851950       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1108 09:44:00.667422       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.218.190"}
	I1108 09:53:02.398622       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5c4deebd862bf273ddedc26d29ff075997a79e7bf5ded55c95e693e61aebfbb9] <==
	I1108 09:43:05.804705       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:43:05.804791       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:43:05.806267       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 09:43:05.810579       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:43:05.811874       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:43:05.813156       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:43:05.813844       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:43:05.817438       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:43:05.821742       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:43:05.822969       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:43:05.823048       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:43:05.824262       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:43:05.828599       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:43:05.831881       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:43:05.834143       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:43:05.839450       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:43:05.845198       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:43:05.845257       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:43:05.845320       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:43:05.845459       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:43:05.845521       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:43:05.845569       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:43:05.846961       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:43:05.865801       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:43:05.870114       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-controller-manager [c90f0ef51eb4b339383ffdab32db53928e505f640ece880d46be44a2589f225d] <==
	I1108 09:42:24.304487       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:42:24.304521       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:42:24.304531       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:42:24.304537       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:42:24.306976       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:42:24.309133       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:42:24.313338       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:42:24.316618       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:42:24.320022       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:42:24.328286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:42:24.335428       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:42:24.343874       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 09:42:24.346393       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:42:24.346487       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:42:24.346538       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:42:24.346610       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:42:24.346853       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:42:24.346911       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:42:24.346961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:42:24.346497       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:42:24.347280       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:42:24.347388       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-386623"
	I1108 09:42:24.347449       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 09:42:24.347517       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:42:24.349015       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	
	
	==> kube-proxy [8840c0489710491c0a40144fa64b082e82931899a99f87adc7620b7f93c3e698] <==
	I1108 09:42:19.886798       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:42:20.150213       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1108 09:42:21.257037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-386623\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1108 09:42:22.252081       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:42:22.252122       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 09:42:22.252191       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:42:22.271269       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:42:22.271323       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:42:22.275130       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:42:22.275416       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:42:22.275438       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:42:22.277066       1 config.go:200] "Starting service config controller"
	I1108 09:42:22.277088       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:42:22.277105       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:42:22.277109       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:42:22.277120       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:42:22.277124       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:42:22.277849       1 config.go:309] "Starting node config controller"
	I1108 09:42:22.277866       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:42:22.277873       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:42:22.377607       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:42:22.377718       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:42:22.377728       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ad5ecec0f2691612871c9c6179576ef0d39f4e47ad9bd9c838d3fe7748446eca] <==
	I1108 09:43:03.183868       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:43:03.396138       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:43:03.500905       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:43:03.501014       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 09:43:03.501113       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:43:03.550826       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:43:03.550991       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:43:03.560186       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:43:03.560623       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:43:03.560808       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:43:03.562166       1 config.go:200] "Starting service config controller"
	I1108 09:43:03.562247       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:43:03.562289       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:43:03.562337       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:43:03.562372       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:43:03.562406       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:43:03.563068       1 config.go:309] "Starting node config controller"
	I1108 09:43:03.563127       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:43:03.563159       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:43:03.662908       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:43:03.662940       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:43:03.662984       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [037b65106a4895081d977c053a04aa48dd6e6e27039c3dfc2f5df8daf3790ab4] <==
	E1108 09:42:21.196936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:42:21.197180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:42:21.197283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:42:21.205094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:42:21.205256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:42:21.205380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:42:21.205489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:42:21.206705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:42:21.206823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:42:21.206923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:42:21.214702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:42:21.214862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:42:21.214976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:42:21.215110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:42:21.215215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:42:21.215340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:42:21.215648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:42:21.215754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found, role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found]" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1108 09:42:21.215915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1108 09:42:22.366741       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:42:46.372409       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1108 09:42:46.372432       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1108 09:42:46.372695       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1108 09:42:46.372963       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1108 09:42:46.372997       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f219cf176356166919dc06113c222e39a6e972530c07bbac68a0d737ba21c3df] <==
	I1108 09:43:02.687360       1 serving.go:386] Generated self-signed cert in-memory
	I1108 09:43:03.335602       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:43:03.335636       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:43:03.340648       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 09:43:03.340763       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 09:43:03.340825       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:43:03.340866       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:43:03.340905       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 09:43:03.340914       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 09:43:03.344345       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:43:03.344419       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:43:03.441814       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 09:43:03.441881       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 09:43:03.441968       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:51:11 functional-386623 kubelet[3900]: E1108 09:51:11.587025    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:51:15 functional-386623 kubelet[3900]: E1108 09:51:15.588578    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:51:22 functional-386623 kubelet[3900]: E1108 09:51:22.587345    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:51:27 functional-386623 kubelet[3900]: E1108 09:51:27.588102    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:51:34 functional-386623 kubelet[3900]: E1108 09:51:34.587579    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:51:40 functional-386623 kubelet[3900]: E1108 09:51:40.587570    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:51:49 functional-386623 kubelet[3900]: E1108 09:51:49.588420    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:51:54 functional-386623 kubelet[3900]: E1108 09:51:54.587066    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:52:04 functional-386623 kubelet[3900]: E1108 09:52:04.587239    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:52:09 functional-386623 kubelet[3900]: E1108 09:52:09.589337    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:52:15 functional-386623 kubelet[3900]: E1108 09:52:15.589141    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:52:24 functional-386623 kubelet[3900]: E1108 09:52:24.587020    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:52:29 functional-386623 kubelet[3900]: E1108 09:52:29.588371    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:52:35 functional-386623 kubelet[3900]: E1108 09:52:35.588258    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:52:41 functional-386623 kubelet[3900]: E1108 09:52:41.588158    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:52:47 functional-386623 kubelet[3900]: E1108 09:52:47.589086    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:52:55 functional-386623 kubelet[3900]: E1108 09:52:55.588713    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:52:58 functional-386623 kubelet[3900]: E1108 09:52:58.586922    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:53:07 functional-386623 kubelet[3900]: E1108 09:53:07.588292    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:53:10 functional-386623 kubelet[3900]: E1108 09:53:10.587882    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:53:20 functional-386623 kubelet[3900]: E1108 09:53:20.587382    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:53:24 functional-386623 kubelet[3900]: E1108 09:53:24.587245    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:53:31 functional-386623 kubelet[3900]: E1108 09:53:31.588730    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	Nov 08 09:53:37 functional-386623 kubelet[3900]: E1108 09:53:37.588383    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xpfdf" podUID="09a4afad-157d-4b1b-8315-1637069d83be"
	Nov 08 09:53:45 functional-386623 kubelet[3900]: E1108 09:53:45.587620    3900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9ccsc" podUID="3a3dd699-8f1b-474a-9283-57d27296d879"
	
	
	==> storage-provisioner [bff6bbd4fa396a6c411f76cf733002cdebd047c9071c5f968023343373b12380] <==
	W1108 09:53:23.345632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:25.348235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:25.352478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:27.356252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:27.363161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:29.366116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:29.370201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:31.373773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:31.378209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:33.381190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:33.387813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:35.390543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:35.395123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:37.398524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:37.403038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:39.406422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:39.410753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:41.414123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:41.420879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:43.423920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:43.429132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:45.434266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:45.441636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:47.445465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:47.452906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c1e8a64d6f1a966d0a4d7328b1c53fc0570690bed8cf3caed7bff43129f79af2] <==
	I1108 09:42:33.949491       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:42:33.960861       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:42:33.960912       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:42:33.963625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:42:37.419955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:42:41.680115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:42:45.285257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386623 -n functional-386623
helpers_test.go:269: (dbg) Run:  kubectl --context functional-386623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-9ccsc hello-node-connect-7d85dfc575-xpfdf
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-386623 describe pod busybox-mount hello-node-75c85bcc94-9ccsc hello-node-connect-7d85dfc575-xpfdf
helpers_test.go:290: (dbg) kubectl --context functional-386623 describe pod busybox-mount hello-node-75c85bcc94-9ccsc hello-node-connect-7d85dfc575-xpfdf:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-386623/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 09:43:34 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://433a258acec38cd72962ec7aebae5fb7fc173012bb85e68612888d04909fe305
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 08 Nov 2025 09:43:37 +0000
	      Finished:     Sat, 08 Nov 2025 09:43:37 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g82dv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-g82dv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-386623
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.316s (2.316s including waiting). Image size: 3774172 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-9ccsc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-386623/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 09:44:00 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fd85x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fd85x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m48s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9ccsc to functional-386623
	  Normal   Pulling    6m52s (x5 over 9m49s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m52s (x5 over 9m49s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m52s (x5 over 9m49s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m44s (x21 over 9m48s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m44s (x21 over 9m48s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-xpfdf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-386623/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 09:43:45 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qz5qp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qz5qp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xpfdf to functional-386623
	  Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.48s)
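
The ImagePullBackOff on both hello-node pods above traces back to the kubelet event in the describe output: CRI-O's short-name mode is enforcing, so the unqualified name kicbase/echo-server resolves to an ambiguous list of candidate registries and the pull is refused. A minimal diagnostic sketch, assuming stock CRI-O paths inside the minikube node (the registries.conf path and the crictl pull of the docker.io-qualified name are assumptions, not taken from this log):

    # Inspect the node's short-name policy and search registries (path assumed).
    out/minikube-linux-arm64 -p functional-386623 ssh "sudo cat /etc/containers/registries.conf"
    # Pulling the fully-qualified name bypasses short-name resolution entirely.
    out/minikube-linux-arm64 -p functional-386623 ssh "sudo crictl pull docker.io/kicbase/echo-server:latest"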

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image load --daemon kicbase/echo-server:functional-386623 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-386623" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image load --daemon kicbase/echo-server:functional-386623 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-386623" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-386623
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image load --daemon kicbase/echo-server:functional-386623 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-386623" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)
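
The three ImageCommands/*Daemon failures above share the same check: after image load --daemon, image ls inside the cluster does not show the kicbase/echo-server:functional-386623 tag. The flow the tests exercise, as a manual sketch (the grep filter is added here only for readability):

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-386623
    out/minikube-linux-arm64 -p functional-386623 image load --daemon kicbase/echo-server:functional-386623
    # If the load worked, the tag shows up in the cluster runtime's image list.
    out/minikube-linux-arm64 -p functional-386623 image ls | grep echo-server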

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image save kicbase/echo-server:functional-386623 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1108 09:43:28.647176 1052200 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:43:28.648907 1052200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:43:28.648924 1052200 out.go:374] Setting ErrFile to fd 2...
	I1108 09:43:28.648930 1052200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:43:28.649200 1052200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:43:28.649873 1052200 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:43:28.649973 1052200 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:43:28.650407 1052200 cli_runner.go:164] Run: docker container inspect functional-386623 --format={{.State.Status}}
	I1108 09:43:28.670796 1052200 ssh_runner.go:195] Run: systemctl --version
	I1108 09:43:28.670868 1052200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
	I1108 09:43:28.691882 1052200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
	I1108 09:43:28.795068 1052200 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1108 09:43:28.795128 1052200 cache_images.go:255] Failed to load cached images for "functional-386623": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1108 09:43:28.795150 1052200 cache_images.go:267] failed pushing to: functional-386623

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
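
This load-from-file failure follows directly from ImageSaveToFile above: the tarball at /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar was never written, so the load step's stat fails with "no such file or directory". The round trip the two subtests expect, as a sketch (the /tmp path is illustrative, not from this run):

    out/minikube-linux-arm64 -p functional-386623 image save kicbase/echo-server:functional-386623 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-386623 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-386623 image ls | grep echo-server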

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-386623
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image save --daemon kicbase/echo-server:functional-386623 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-386623
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-386623: exit status 1 (17.938367ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-386623

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-386623

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-386623 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-386623 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-9ccsc" [3a3dd699-8f1b-474a-9283-57d27296d879] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1108 09:44:10.287496 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:46:26.426415 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:46:54.128917 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:51:26.426397 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386623 -n functional-386623
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-08 09:54:01.030842886 +0000 UTC m=+1240.881073955
functional_test.go:1460: (dbg) Run:  kubectl --context functional-386623 describe po hello-node-75c85bcc94-9ccsc -n default
functional_test.go:1460: (dbg) kubectl --context functional-386623 describe po hello-node-75c85bcc94-9ccsc -n default:
Name:             hello-node-75c85bcc94-9ccsc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-386623/192.168.49.2
Start Time:       Sat, 08 Nov 2025 09:44:00 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fd85x (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-fd85x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9ccsc to functional-386623
  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-386623 logs hello-node-75c85bcc94-9ccsc -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-386623 logs hello-node-75c85bcc94-9ccsc -n default: exit status 1 (99.356085ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-9ccsc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-386623 logs hello-node-75c85bcc94-9ccsc -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.82s)
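
The deployment created at functional_test.go:1451 uses the short name kicbase/echo-server, which is what trips CRI-O's enforcing short-name mode on this node. A workaround sketch, assuming the intended image is the one published on Docker Hub (the docker.io prefix is an assumption, not taken from this log):

    kubectl --context functional-386623 delete deployment hello-node
    kubectl --context functional-386623 create deployment hello-node --image=docker.io/kicbase/echo-server
    kubectl --context functional-386623 rollout status deployment/hello-node --timeout=2m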

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 service --namespace=default --https --url hello-node: exit status 115 (573.536063ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31026
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-386623 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 service hello-node --url --format={{.IP}}: exit status 115 (470.300435ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-386623 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 service hello-node --url: exit status 115 (416.546885ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31026
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-386623 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31026
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.42s)
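
The HTTPS, Format, and URL subtests above all exit 115 with SVC_UNREACHABLE for the same reason: the hello-node service and its NodePort exist (minikube even prints https://192.168.49.2:31026), but no running pod backs it because of the image pull failure. A quick confirmation sketch (the curl call is illustrative and would only succeed once a pod is Ready):

    kubectl --context functional-386623 get endpoints hello-node
    kubectl --context functional-386623 get pods -l app=hello-node
    curl -s http://192.168.49.2:31026

With no ready pod the endpoints list stays empty, which is exactly what minikube reports as "no running pod for service hello-node found".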

                                                
                                    
TestJSONOutput/pause/Command (1.54s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-266002 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-266002 --output=json --user=testUser: exit status 80 (1.537578335s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6eefecb4-5c70-4b98-b5df-700afca68751","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-266002 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"1198fe94-665b-45c5-8180-63b961792a2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-08T10:06:22Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"5e297638-bbd5-4746-81a5-ee78a1b2a330","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-266002 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.54s)

                                                
                                    
TestJSONOutput/unpause/Command (1.92s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-266002 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-266002 --output=json --user=testUser: exit status 80 (1.922649855s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e13e8c1a-05c5-4f21-b966-3538d4f8ead4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-266002 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"55b7d8fb-9607-45b1-a017-b193aa1e97b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-08T10:06:24Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"bf9aa62a-8c30-48b3-9d8c-4f630d0ee0a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-266002 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.92s)
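
The pause and unpause failures above share one node-side error: sudo runc list -f json exits 1 because /run/runc does not exist, so minikube cannot enumerate the containers it needs to pause or unpause. A hypothetical check directly on the node (the ssh invocation and the ls probe are assumptions, not taken from this log):

    out/minikube-linux-arm64 -p json-output-266002 ssh "sudo ls -ld /run/runc"
    out/minikube-linux-arm64 -p json-output-266002 ssh "sudo runc list -f json"

The same symptom appears again in the TestPause/serial/Pause log below.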

                                                
                                    
TestPause/serial/Pause (6.84s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-343192 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-343192 --alsologtostderr -v=5: exit status 80 (2.204832908s)

                                                
                                                
-- stdout --
	* Pausing node pause-343192 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:28:15.929758 1190134 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:28:15.930551 1190134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:28:15.930568 1190134 out.go:374] Setting ErrFile to fd 2...
	I1108 10:28:15.930574 1190134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:28:15.930897 1190134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:28:15.931203 1190134 out.go:368] Setting JSON to false
	I1108 10:28:15.931247 1190134 mustload.go:66] Loading cluster: pause-343192
	I1108 10:28:15.931698 1190134 config.go:182] Loaded profile config "pause-343192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:28:15.932252 1190134 cli_runner.go:164] Run: docker container inspect pause-343192 --format={{.State.Status}}
	I1108 10:28:15.948509 1190134 host.go:66] Checking if "pause-343192" exists ...
	I1108 10:28:15.948849 1190134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:28:16.006041 1190134 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:28:15.995734146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:28:16.007023 1190134 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-343192 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:28:16.010209 1190134 out.go:179] * Pausing node pause-343192 ... 
	I1108 10:28:16.013878 1190134 host.go:66] Checking if "pause-343192" exists ...
	I1108 10:28:16.014269 1190134 ssh_runner.go:195] Run: systemctl --version
	I1108 10:28:16.014322 1190134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:28:16.032271 1190134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34482 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/pause-343192/id_rsa Username:docker}
	I1108 10:28:16.141456 1190134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:28:16.154901 1190134 pause.go:52] kubelet running: true
	I1108 10:28:16.154985 1190134 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:28:16.355762 1190134 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:28:16.355841 1190134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:28:16.425478 1190134 cri.go:89] found id: "0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f"
	I1108 10:28:16.425503 1190134 cri.go:89] found id: "f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981"
	I1108 10:28:16.425512 1190134 cri.go:89] found id: "cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb"
	I1108 10:28:16.425516 1190134 cri.go:89] found id: "d7a533806da9a332f1645c3360d5c4237a84729469ae7cb42a33daa107441f86"
	I1108 10:28:16.425520 1190134 cri.go:89] found id: "a2e998f95e3dabd458d90198ae4130a56a78b9685b3e0f821b670a31300781b6"
	I1108 10:28:16.425524 1190134 cri.go:89] found id: "808a055bd254c0bbbee4c3c751830708801f4ced02a2c5deb329197a434cd541"
	I1108 10:28:16.425527 1190134 cri.go:89] found id: "7036025861b31b3ce32c7deda2244e7cb402d4a8ef261e6ea3f8a57bb78fce01"
	I1108 10:28:16.425531 1190134 cri.go:89] found id: "1d28edcd8cca7648e1bc0b2fb042df7c5b1f90debfa5083af69296a4afa052d1"
	I1108 10:28:16.425540 1190134 cri.go:89] found id: "cbed26c9cc82d142d3d895dc7635d0efb73e033cb99b08450139b3c5de56c054"
	I1108 10:28:16.425550 1190134 cri.go:89] found id: "a327dc75a2da5df572b9729b0560d0810a03921afea0a1ea766f4032377a4d50"
	I1108 10:28:16.425554 1190134 cri.go:89] found id: "e64d76a590f592ad5123ea146cba17cee655e4c302e7d2c00d65f628678c8146"
	I1108 10:28:16.425557 1190134 cri.go:89] found id: "6cf1df7c69fa46c783c4d0d0ed7275b2f7575903b38be95723c5fadb80a5adb2"
	I1108 10:28:16.425560 1190134 cri.go:89] found id: "7a08c37ef37992bde0d0bd0f71fdddbca47883b01dd90e96da703efd35f23fd8"
	I1108 10:28:16.425563 1190134 cri.go:89] found id: "4c21fbaf9d079fb5c4cbd03ca8e0149295b10f764ae1c6826063a0516b80ba46"
	I1108 10:28:16.425565 1190134 cri.go:89] found id: ""
	I1108 10:28:16.425618 1190134 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:28:16.437434 1190134 retry.go:31] will retry after 160.731459ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:28:16Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:28:16.598841 1190134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:28:16.611834 1190134 pause.go:52] kubelet running: false
	I1108 10:28:16.611902 1190134 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:28:16.760664 1190134 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:28:16.760743 1190134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:28:16.827665 1190134 cri.go:89] found id: "0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f"
	I1108 10:28:16.827691 1190134 cri.go:89] found id: "f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981"
	I1108 10:28:16.827697 1190134 cri.go:89] found id: "cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb"
	I1108 10:28:16.827702 1190134 cri.go:89] found id: "d7a533806da9a332f1645c3360d5c4237a84729469ae7cb42a33daa107441f86"
	I1108 10:28:16.827705 1190134 cri.go:89] found id: "a2e998f95e3dabd458d90198ae4130a56a78b9685b3e0f821b670a31300781b6"
	I1108 10:28:16.827708 1190134 cri.go:89] found id: "808a055bd254c0bbbee4c3c751830708801f4ced02a2c5deb329197a434cd541"
	I1108 10:28:16.827712 1190134 cri.go:89] found id: "7036025861b31b3ce32c7deda2244e7cb402d4a8ef261e6ea3f8a57bb78fce01"
	I1108 10:28:16.827735 1190134 cri.go:89] found id: "1d28edcd8cca7648e1bc0b2fb042df7c5b1f90debfa5083af69296a4afa052d1"
	I1108 10:28:16.827747 1190134 cri.go:89] found id: "cbed26c9cc82d142d3d895dc7635d0efb73e033cb99b08450139b3c5de56c054"
	I1108 10:28:16.827759 1190134 cri.go:89] found id: "a327dc75a2da5df572b9729b0560d0810a03921afea0a1ea766f4032377a4d50"
	I1108 10:28:16.827769 1190134 cri.go:89] found id: "e64d76a590f592ad5123ea146cba17cee655e4c302e7d2c00d65f628678c8146"
	I1108 10:28:16.827773 1190134 cri.go:89] found id: "6cf1df7c69fa46c783c4d0d0ed7275b2f7575903b38be95723c5fadb80a5adb2"
	I1108 10:28:16.827776 1190134 cri.go:89] found id: "7a08c37ef37992bde0d0bd0f71fdddbca47883b01dd90e96da703efd35f23fd8"
	I1108 10:28:16.827779 1190134 cri.go:89] found id: "4c21fbaf9d079fb5c4cbd03ca8e0149295b10f764ae1c6826063a0516b80ba46"
	I1108 10:28:16.827782 1190134 cri.go:89] found id: ""
	I1108 10:28:16.827845 1190134 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:28:16.838655 1190134 retry.go:31] will retry after 255.601622ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:28:16Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:28:17.095143 1190134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:28:17.110207 1190134 pause.go:52] kubelet running: false
	I1108 10:28:17.110300 1190134 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:28:17.259536 1190134 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:28:17.259641 1190134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:28:17.334791 1190134 cri.go:89] found id: "0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f"
	I1108 10:28:17.334814 1190134 cri.go:89] found id: "f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981"
	I1108 10:28:17.334820 1190134 cri.go:89] found id: "cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb"
	I1108 10:28:17.334823 1190134 cri.go:89] found id: "d7a533806da9a332f1645c3360d5c4237a84729469ae7cb42a33daa107441f86"
	I1108 10:28:17.334827 1190134 cri.go:89] found id: "a2e998f95e3dabd458d90198ae4130a56a78b9685b3e0f821b670a31300781b6"
	I1108 10:28:17.334830 1190134 cri.go:89] found id: "808a055bd254c0bbbee4c3c751830708801f4ced02a2c5deb329197a434cd541"
	I1108 10:28:17.334834 1190134 cri.go:89] found id: "7036025861b31b3ce32c7deda2244e7cb402d4a8ef261e6ea3f8a57bb78fce01"
	I1108 10:28:17.334837 1190134 cri.go:89] found id: "1d28edcd8cca7648e1bc0b2fb042df7c5b1f90debfa5083af69296a4afa052d1"
	I1108 10:28:17.334841 1190134 cri.go:89] found id: "cbed26c9cc82d142d3d895dc7635d0efb73e033cb99b08450139b3c5de56c054"
	I1108 10:28:17.334847 1190134 cri.go:89] found id: "a327dc75a2da5df572b9729b0560d0810a03921afea0a1ea766f4032377a4d50"
	I1108 10:28:17.334851 1190134 cri.go:89] found id: "e64d76a590f592ad5123ea146cba17cee655e4c302e7d2c00d65f628678c8146"
	I1108 10:28:17.334854 1190134 cri.go:89] found id: "6cf1df7c69fa46c783c4d0d0ed7275b2f7575903b38be95723c5fadb80a5adb2"
	I1108 10:28:17.334858 1190134 cri.go:89] found id: "7a08c37ef37992bde0d0bd0f71fdddbca47883b01dd90e96da703efd35f23fd8"
	I1108 10:28:17.334863 1190134 cri.go:89] found id: "4c21fbaf9d079fb5c4cbd03ca8e0149295b10f764ae1c6826063a0516b80ba46"
	I1108 10:28:17.334866 1190134 cri.go:89] found id: ""
	I1108 10:28:17.334923 1190134 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:28:17.355881 1190134 retry.go:31] will retry after 461.195377ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:28:17Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:28:17.817453 1190134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:28:17.830322 1190134 pause.go:52] kubelet running: false
	I1108 10:28:17.830416 1190134 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:28:17.971824 1190134 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:28:17.971944 1190134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:28:18.045702 1190134 cri.go:89] found id: "0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f"
	I1108 10:28:18.045777 1190134 cri.go:89] found id: "f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981"
	I1108 10:28:18.045807 1190134 cri.go:89] found id: "cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb"
	I1108 10:28:18.045820 1190134 cri.go:89] found id: "d7a533806da9a332f1645c3360d5c4237a84729469ae7cb42a33daa107441f86"
	I1108 10:28:18.045825 1190134 cri.go:89] found id: "a2e998f95e3dabd458d90198ae4130a56a78b9685b3e0f821b670a31300781b6"
	I1108 10:28:18.045829 1190134 cri.go:89] found id: "808a055bd254c0bbbee4c3c751830708801f4ced02a2c5deb329197a434cd541"
	I1108 10:28:18.045833 1190134 cri.go:89] found id: "7036025861b31b3ce32c7deda2244e7cb402d4a8ef261e6ea3f8a57bb78fce01"
	I1108 10:28:18.045836 1190134 cri.go:89] found id: "1d28edcd8cca7648e1bc0b2fb042df7c5b1f90debfa5083af69296a4afa052d1"
	I1108 10:28:18.045839 1190134 cri.go:89] found id: "cbed26c9cc82d142d3d895dc7635d0efb73e033cb99b08450139b3c5de56c054"
	I1108 10:28:18.045845 1190134 cri.go:89] found id: "a327dc75a2da5df572b9729b0560d0810a03921afea0a1ea766f4032377a4d50"
	I1108 10:28:18.045849 1190134 cri.go:89] found id: "e64d76a590f592ad5123ea146cba17cee655e4c302e7d2c00d65f628678c8146"
	I1108 10:28:18.045853 1190134 cri.go:89] found id: "6cf1df7c69fa46c783c4d0d0ed7275b2f7575903b38be95723c5fadb80a5adb2"
	I1108 10:28:18.045857 1190134 cri.go:89] found id: "7a08c37ef37992bde0d0bd0f71fdddbca47883b01dd90e96da703efd35f23fd8"
	I1108 10:28:18.045860 1190134 cri.go:89] found id: "4c21fbaf9d079fb5c4cbd03ca8e0149295b10f764ae1c6826063a0516b80ba46"
	I1108 10:28:18.045864 1190134 cri.go:89] found id: ""
	I1108 10:28:18.045931 1190134 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:28:18.061010 1190134 out.go:203] 
	W1108 10:28:18.064121 1190134 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:28:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:28:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:28:18.064139 1190134 out.go:285] * 
	* 
	W1108 10:28:18.073103 1190134 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:28:18.076107 1190134 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-343192 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-343192
helpers_test.go:243: (dbg) docker inspect pause-343192:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a",
	        "Created": "2025-11-08T10:26:29.280150982Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1184237,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:26:29.354017573Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a/hostname",
	        "HostsPath": "/var/lib/docker/containers/b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a/hosts",
	        "LogPath": "/var/lib/docker/containers/b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a/b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a-json.log",
	        "Name": "/pause-343192",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-343192:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-343192",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a",
	                "LowerDir": "/var/lib/docker/overlay2/5b62cf98731e9c9fbbaebf9242d274508371b43f530d1daee79cccee16fc9915-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b62cf98731e9c9fbbaebf9242d274508371b43f530d1daee79cccee16fc9915/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b62cf98731e9c9fbbaebf9242d274508371b43f530d1daee79cccee16fc9915/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b62cf98731e9c9fbbaebf9242d274508371b43f530d1daee79cccee16fc9915/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-343192",
	                "Source": "/var/lib/docker/volumes/pause-343192/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-343192",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-343192",
	                "name.minikube.sigs.k8s.io": "pause-343192",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b36c1b5ee1d026b8ad6a8d8a633e5415b17f056b894281aa1469ed9e63e8d8b1",
	            "SandboxKey": "/var/run/docker/netns/b36c1b5ee1d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34482"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34483"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34486"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34484"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34485"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-343192": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:0b:50:d9:3e:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "af705cfd21d261f64ffae0a47851a33e50f9d449ae94c054706bbe7bdf083c91",
	                    "EndpointID": "10b573f0bfaa64dcc67f395a7011be827eae0e8759453d900c42c4da393ad2ba",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-343192",
	                        "b390adacb4f3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
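The inspect output above still reports "Running": true and "Paused": false for the pause-343192 container, i.e. the failed pause command left the container itself unpaused, which is why the host status check below still prints "Running". A minimal re-check of just those two fields on the host (a sketch, assuming the docker CLI is available and the pause-343192 container from this run still exists):

	docker container inspect -f 'Running={{.State.Running}} Paused={{.State.Paused}}' pause-343192
	# for the state captured in this report the output would be: Running=true Paused=false
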
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-343192 -n pause-343192
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-343192 -n pause-343192: exit status 2 (344.055967ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-343192 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-343192 logs -n 25: (1.355576442s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-012922 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:22 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p missing-upgrade-625347 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-625347    │ jenkins │ v1.32.0 │ 08 Nov 25 10:22 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-012922 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ delete  │ -p NoKubernetes-012922                                                                                                                   │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-012922 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ ssh     │ -p NoKubernetes-012922 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │                     │
	│ stop    │ -p NoKubernetes-012922                                                                                                                   │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-012922 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p missing-upgrade-625347 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-625347    │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:24 UTC │
	│ ssh     │ -p NoKubernetes-012922 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │                     │
	│ delete  │ -p NoKubernetes-012922                                                                                                                   │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-666491 │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:24 UTC │
	│ delete  │ -p missing-upgrade-625347                                                                                                                │ missing-upgrade-625347    │ jenkins │ v1.37.0 │ 08 Nov 25 10:24 UTC │ 08 Nov 25 10:24 UTC │
	│ stop    │ -p kubernetes-upgrade-666491                                                                                                             │ kubernetes-upgrade-666491 │ jenkins │ v1.37.0 │ 08 Nov 25 10:24 UTC │ 08 Nov 25 10:24 UTC │
	│ start   │ -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-666491 │ jenkins │ v1.37.0 │ 08 Nov 25 10:24 UTC │                     │
	│ start   │ -p stopped-upgrade-660964 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-660964    │ jenkins │ v1.32.0 │ 08 Nov 25 10:24 UTC │ 08 Nov 25 10:25 UTC │
	│ stop    │ stopped-upgrade-660964 stop                                                                                                              │ stopped-upgrade-660964    │ jenkins │ v1.32.0 │ 08 Nov 25 10:25 UTC │ 08 Nov 25 10:25 UTC │
	│ start   │ -p stopped-upgrade-660964 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-660964    │ jenkins │ v1.37.0 │ 08 Nov 25 10:25 UTC │ 08 Nov 25 10:25 UTC │
	│ delete  │ -p stopped-upgrade-660964                                                                                                                │ stopped-upgrade-660964    │ jenkins │ v1.37.0 │ 08 Nov 25 10:25 UTC │ 08 Nov 25 10:25 UTC │
	│ start   │ -p running-upgrade-980073 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-980073    │ jenkins │ v1.32.0 │ 08 Nov 25 10:25 UTC │ 08 Nov 25 10:26 UTC │
	│ start   │ -p running-upgrade-980073 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-980073    │ jenkins │ v1.37.0 │ 08 Nov 25 10:26 UTC │ 08 Nov 25 10:26 UTC │
	│ delete  │ -p running-upgrade-980073                                                                                                                │ running-upgrade-980073    │ jenkins │ v1.37.0 │ 08 Nov 25 10:26 UTC │ 08 Nov 25 10:26 UTC │
	│ start   │ -p pause-343192 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-343192              │ jenkins │ v1.37.0 │ 08 Nov 25 10:26 UTC │ 08 Nov 25 10:27 UTC │
	│ start   │ -p pause-343192 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-343192              │ jenkins │ v1.37.0 │ 08 Nov 25 10:27 UTC │ 08 Nov 25 10:28 UTC │
	│ pause   │ -p pause-343192 --alsologtostderr -v=5                                                                                                   │ pause-343192              │ jenkins │ v1.37.0 │ 08 Nov 25 10:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:27:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:27:45.977606 1188449 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:27:45.977781 1188449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:27:45.977795 1188449 out.go:374] Setting ErrFile to fd 2...
	I1108 10:27:45.977801 1188449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:27:45.978100 1188449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:27:45.978498 1188449 out.go:368] Setting JSON to false
	I1108 10:27:45.979515 1188449 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33011,"bootTime":1762564655,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:27:45.979597 1188449 start.go:143] virtualization:  
	I1108 10:27:45.983630 1188449 out.go:179] * [pause-343192] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:27:45.986574 1188449 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:27:45.986685 1188449 notify.go:221] Checking for updates...
	I1108 10:27:45.992571 1188449 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:27:45.995507 1188449 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:27:45.998483 1188449 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:27:46.001421 1188449 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:27:46.004689 1188449 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:27:46.008569 1188449 config.go:182] Loaded profile config "pause-343192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:27:46.009183 1188449 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:27:46.045902 1188449 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:27:46.046019 1188449 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:27:46.108854 1188449 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:27:46.098725493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:27:46.108966 1188449 docker.go:319] overlay module found
	I1108 10:27:46.112168 1188449 out.go:179] * Using the docker driver based on existing profile
	I1108 10:27:46.115038 1188449 start.go:309] selected driver: docker
	I1108 10:27:46.115063 1188449 start.go:930] validating driver "docker" against &{Name:pause-343192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-343192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:27:46.115234 1188449 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:27:46.115339 1188449 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:27:46.177826 1188449 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:27:46.167901341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:27:46.178290 1188449 cni.go:84] Creating CNI manager for ""
	I1108 10:27:46.178349 1188449 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:27:46.178433 1188449 start.go:353] cluster config:
	{Name:pause-343192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-343192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:27:46.183846 1188449 out.go:179] * Starting "pause-343192" primary control-plane node in "pause-343192" cluster
	I1108 10:27:46.186707 1188449 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:27:46.189579 1188449 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:27:46.192412 1188449 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:27:46.192483 1188449 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:27:46.192488 1188449 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:27:46.192495 1188449 cache.go:59] Caching tarball of preloaded images
	I1108 10:27:46.192584 1188449 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:27:46.192594 1188449 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:27:46.192749 1188449 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/config.json ...
	I1108 10:27:46.211100 1188449 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:27:46.211123 1188449 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:27:46.211141 1188449 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:27:46.211163 1188449 start.go:360] acquireMachinesLock for pause-343192: {Name:mk5a19317988718a71345d25975ea9a0c5d84756 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:27:46.211220 1188449 start.go:364] duration metric: took 35.494µs to acquireMachinesLock for "pause-343192"
	I1108 10:27:46.211249 1188449 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:27:46.211257 1188449 fix.go:54] fixHost starting: 
	I1108 10:27:46.211514 1188449 cli_runner.go:164] Run: docker container inspect pause-343192 --format={{.State.Status}}
	I1108 10:27:46.230116 1188449 fix.go:112] recreateIfNeeded on pause-343192: state=Running err=<nil>
	W1108 10:27:46.230143 1188449 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 10:27:46.530899 1173175 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:27:46.531346 1173175 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1108 10:27:46.531391 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 10:27:46.531444 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 10:27:46.559466 1173175 cri.go:89] found id: "8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:46.559485 1173175 cri.go:89] found id: ""
	I1108 10:27:46.559493 1173175 logs.go:282] 1 containers: [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98]
	I1108 10:27:46.559557 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:46.563238 1173175 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 10:27:46.563323 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 10:27:46.600335 1173175 cri.go:89] found id: ""
	I1108 10:27:46.600362 1173175 logs.go:282] 0 containers: []
	W1108 10:27:46.600371 1173175 logs.go:284] No container was found matching "etcd"
	I1108 10:27:46.600377 1173175 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 10:27:46.600463 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 10:27:46.639292 1173175 cri.go:89] found id: ""
	I1108 10:27:46.639318 1173175 logs.go:282] 0 containers: []
	W1108 10:27:46.639326 1173175 logs.go:284] No container was found matching "coredns"
	I1108 10:27:46.639332 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 10:27:46.639395 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 10:27:46.679383 1173175 cri.go:89] found id: "1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:46.679410 1173175 cri.go:89] found id: ""
	I1108 10:27:46.679421 1173175 logs.go:282] 1 containers: [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412]
	I1108 10:27:46.679486 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:46.683141 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 10:27:46.683222 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 10:27:46.722718 1173175 cri.go:89] found id: ""
	I1108 10:27:46.722743 1173175 logs.go:282] 0 containers: []
	W1108 10:27:46.722752 1173175 logs.go:284] No container was found matching "kube-proxy"
	I1108 10:27:46.722758 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 10:27:46.722815 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 10:27:46.753147 1173175 cri.go:89] found id: "1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:46.753172 1173175 cri.go:89] found id: ""
	I1108 10:27:46.753181 1173175 logs.go:282] 1 containers: [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c]
	I1108 10:27:46.753234 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:46.756669 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 10:27:46.756740 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 10:27:46.783459 1173175 cri.go:89] found id: ""
	I1108 10:27:46.783485 1173175 logs.go:282] 0 containers: []
	W1108 10:27:46.783494 1173175 logs.go:284] No container was found matching "kindnet"
	I1108 10:27:46.783500 1173175 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 10:27:46.783558 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 10:27:46.849797 1173175 cri.go:89] found id: ""
	I1108 10:27:46.849820 1173175 logs.go:282] 0 containers: []
	W1108 10:27:46.849828 1173175 logs.go:284] No container was found matching "storage-provisioner"
	I1108 10:27:46.849837 1173175 logs.go:123] Gathering logs for container status ...
	I1108 10:27:46.849850 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 10:27:46.900945 1173175 logs.go:123] Gathering logs for kubelet ...
	I1108 10:27:46.900970 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 10:27:47.027928 1173175 logs.go:123] Gathering logs for dmesg ...
	I1108 10:27:47.028004 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 10:27:47.047771 1173175 logs.go:123] Gathering logs for describe nodes ...
	I1108 10:27:47.047806 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 10:27:47.122201 1173175 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 10:27:47.122274 1173175 logs.go:123] Gathering logs for kube-apiserver [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98] ...
	I1108 10:27:47.122294 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:47.170318 1173175 logs.go:123] Gathering logs for kube-scheduler [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412] ...
	I1108 10:27:47.170349 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:47.264805 1173175 logs.go:123] Gathering logs for kube-controller-manager [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c] ...
	I1108 10:27:47.264833 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:47.304324 1173175 logs.go:123] Gathering logs for CRI-O ...
	I1108 10:27:47.304352 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 10:27:46.233234 1188449 out.go:252] * Updating the running docker "pause-343192" container ...
	I1108 10:27:46.233268 1188449 machine.go:94] provisionDockerMachine start ...
	I1108 10:27:46.233355 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:46.251182 1188449 main.go:143] libmachine: Using SSH client type: native
	I1108 10:27:46.251515 1188449 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34482 <nil> <nil>}
	I1108 10:27:46.251532 1188449 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:27:46.411877 1188449 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-343192
	
	I1108 10:27:46.411906 1188449 ubuntu.go:182] provisioning hostname "pause-343192"
	I1108 10:27:46.412007 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:46.430365 1188449 main.go:143] libmachine: Using SSH client type: native
	I1108 10:27:46.430669 1188449 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34482 <nil> <nil>}
	I1108 10:27:46.430686 1188449 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-343192 && echo "pause-343192" | sudo tee /etc/hostname
	I1108 10:27:46.602798 1188449 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-343192
	
	I1108 10:27:46.602869 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:46.628667 1188449 main.go:143] libmachine: Using SSH client type: native
	I1108 10:27:46.629033 1188449 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34482 <nil> <nil>}
	I1108 10:27:46.629050 1188449 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-343192' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-343192/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-343192' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:27:46.796382 1188449 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:27:46.796413 1188449 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:27:46.796498 1188449 ubuntu.go:190] setting up certificates
	I1108 10:27:46.796508 1188449 provision.go:84] configureAuth start
	I1108 10:27:46.796568 1188449 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-343192
	I1108 10:27:46.822023 1188449 provision.go:143] copyHostCerts
	I1108 10:27:46.822087 1188449 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:27:46.822097 1188449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:27:46.822173 1188449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:27:46.822267 1188449 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:27:46.822273 1188449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:27:46.822299 1188449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:27:46.822348 1188449 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:27:46.822352 1188449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:27:46.822375 1188449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:27:46.822423 1188449 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.pause-343192 san=[127.0.0.1 192.168.85.2 localhost minikube pause-343192]
	I1108 10:27:47.003469 1188449 provision.go:177] copyRemoteCerts
	I1108 10:27:47.003548 1188449 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:27:47.003610 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:47.032949 1188449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34482 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/pause-343192/id_rsa Username:docker}
	I1108 10:27:47.149230 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:27:47.184876 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:27:47.210834 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1108 10:27:47.241542 1188449 provision.go:87] duration metric: took 445.014907ms to configureAuth
	I1108 10:27:47.241567 1188449 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:27:47.241784 1188449 config.go:182] Loaded profile config "pause-343192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:27:47.241897 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:47.263211 1188449 main.go:143] libmachine: Using SSH client type: native
	I1108 10:27:47.263612 1188449 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34482 <nil> <nil>}
	I1108 10:27:47.263751 1188449 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:27:49.881560 1173175 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:27:49.882026 1173175 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1108 10:27:49.882102 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 10:27:49.882182 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 10:27:49.913451 1173175 cri.go:89] found id: "8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:49.913471 1173175 cri.go:89] found id: ""
	I1108 10:27:49.913479 1173175 logs.go:282] 1 containers: [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98]
	I1108 10:27:49.913544 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:49.917303 1173175 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 10:27:49.917383 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 10:27:49.943688 1173175 cri.go:89] found id: ""
	I1108 10:27:49.943716 1173175 logs.go:282] 0 containers: []
	W1108 10:27:49.943725 1173175 logs.go:284] No container was found matching "etcd"
	I1108 10:27:49.943731 1173175 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 10:27:49.943789 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 10:27:49.968772 1173175 cri.go:89] found id: ""
	I1108 10:27:49.968796 1173175 logs.go:282] 0 containers: []
	W1108 10:27:49.968805 1173175 logs.go:284] No container was found matching "coredns"
	I1108 10:27:49.968811 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 10:27:49.968869 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 10:27:49.995180 1173175 cri.go:89] found id: "1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:49.995202 1173175 cri.go:89] found id: ""
	I1108 10:27:49.995211 1173175 logs.go:282] 1 containers: [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412]
	I1108 10:27:49.995266 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:49.999069 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 10:27:49.999142 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 10:27:50.034370 1173175 cri.go:89] found id: ""
	I1108 10:27:50.034397 1173175 logs.go:282] 0 containers: []
	W1108 10:27:50.034406 1173175 logs.go:284] No container was found matching "kube-proxy"
	I1108 10:27:50.034413 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 10:27:50.034482 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 10:27:50.065037 1173175 cri.go:89] found id: "1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:50.065061 1173175 cri.go:89] found id: ""
	I1108 10:27:50.065070 1173175 logs.go:282] 1 containers: [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c]
	I1108 10:27:50.065126 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:50.068995 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 10:27:50.069071 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 10:27:50.097193 1173175 cri.go:89] found id: ""
	I1108 10:27:50.097221 1173175 logs.go:282] 0 containers: []
	W1108 10:27:50.097230 1173175 logs.go:284] No container was found matching "kindnet"
	I1108 10:27:50.097238 1173175 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 10:27:50.097301 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 10:27:50.124701 1173175 cri.go:89] found id: ""
	I1108 10:27:50.124726 1173175 logs.go:282] 0 containers: []
	W1108 10:27:50.124735 1173175 logs.go:284] No container was found matching "storage-provisioner"
	I1108 10:27:50.124744 1173175 logs.go:123] Gathering logs for kube-controller-manager [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c] ...
	I1108 10:27:50.124775 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:50.158775 1173175 logs.go:123] Gathering logs for CRI-O ...
	I1108 10:27:50.158803 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 10:27:50.213271 1173175 logs.go:123] Gathering logs for container status ...
	I1108 10:27:50.213307 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 10:27:50.244020 1173175 logs.go:123] Gathering logs for kubelet ...
	I1108 10:27:50.244047 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 10:27:50.359123 1173175 logs.go:123] Gathering logs for dmesg ...
	I1108 10:27:50.359159 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 10:27:50.377451 1173175 logs.go:123] Gathering logs for describe nodes ...
	I1108 10:27:50.377486 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 10:27:50.441907 1173175 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 10:27:50.441926 1173175 logs.go:123] Gathering logs for kube-apiserver [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98] ...
	I1108 10:27:50.441940 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:50.474355 1173175 logs.go:123] Gathering logs for kube-scheduler [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412] ...
	I1108 10:27:50.474387 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:53.035315 1173175 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:27:53.035749 1173175 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1108 10:27:53.035794 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 10:27:53.035851 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 10:27:53.068665 1173175 cri.go:89] found id: "8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:53.068685 1173175 cri.go:89] found id: ""
	I1108 10:27:53.068693 1173175 logs.go:282] 1 containers: [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98]
	I1108 10:27:53.068748 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:53.072689 1173175 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 10:27:53.072762 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 10:27:53.109777 1173175 cri.go:89] found id: ""
	I1108 10:27:53.109802 1173175 logs.go:282] 0 containers: []
	W1108 10:27:53.109810 1173175 logs.go:284] No container was found matching "etcd"
	I1108 10:27:53.109816 1173175 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 10:27:53.109873 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 10:27:53.149242 1173175 cri.go:89] found id: ""
	I1108 10:27:53.149266 1173175 logs.go:282] 0 containers: []
	W1108 10:27:53.149275 1173175 logs.go:284] No container was found matching "coredns"
	I1108 10:27:53.149281 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 10:27:53.149341 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 10:27:53.188188 1173175 cri.go:89] found id: "1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:53.188211 1173175 cri.go:89] found id: ""
	I1108 10:27:53.188219 1173175 logs.go:282] 1 containers: [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412]
	I1108 10:27:53.188275 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:53.192891 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 10:27:53.192967 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 10:27:53.230575 1173175 cri.go:89] found id: ""
	I1108 10:27:53.230608 1173175 logs.go:282] 0 containers: []
	W1108 10:27:53.230618 1173175 logs.go:284] No container was found matching "kube-proxy"
	I1108 10:27:53.230624 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 10:27:53.230679 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 10:27:53.262411 1173175 cri.go:89] found id: "1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:53.262438 1173175 cri.go:89] found id: ""
	I1108 10:27:53.262447 1173175 logs.go:282] 1 containers: [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c]
	I1108 10:27:53.262502 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:53.266295 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 10:27:53.266361 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 10:27:53.299542 1173175 cri.go:89] found id: ""
	I1108 10:27:53.299583 1173175 logs.go:282] 0 containers: []
	W1108 10:27:53.299592 1173175 logs.go:284] No container was found matching "kindnet"
	I1108 10:27:53.299598 1173175 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 10:27:53.299666 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 10:27:53.334416 1173175 cri.go:89] found id: ""
	I1108 10:27:53.334444 1173175 logs.go:282] 0 containers: []
	W1108 10:27:53.334452 1173175 logs.go:284] No container was found matching "storage-provisioner"
	I1108 10:27:53.334462 1173175 logs.go:123] Gathering logs for kubelet ...
	I1108 10:27:53.334473 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 10:27:53.466663 1173175 logs.go:123] Gathering logs for dmesg ...
	I1108 10:27:53.466699 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 10:27:53.487147 1173175 logs.go:123] Gathering logs for describe nodes ...
	I1108 10:27:53.487178 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 10:27:53.589677 1173175 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 10:27:53.589696 1173175 logs.go:123] Gathering logs for kube-apiserver [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98] ...
	I1108 10:27:53.589708 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:53.631372 1173175 logs.go:123] Gathering logs for kube-scheduler [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412] ...
	I1108 10:27:53.631442 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:53.701768 1173175 logs.go:123] Gathering logs for kube-controller-manager [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c] ...
	I1108 10:27:53.701847 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:53.740220 1173175 logs.go:123] Gathering logs for CRI-O ...
	I1108 10:27:53.740248 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 10:27:53.818012 1173175 logs.go:123] Gathering logs for container status ...
	I1108 10:27:53.818090 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
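
Process 1173175 above is minikube's diagnostics pass against a control plane whose apiserver is refusing connections: for each expected component it asks crictl for matching container IDs, tails the logs of whatever it finds, and falls back to kubelet, dmesg, and CRI-O journal output for the rest. A minimal sketch of that pattern follows, assuming a hypothetical runCmd helper in place of minikube's ssh_runner; it is an illustration of the loop visible in the log, not minikube's actual code.

// Illustrative sketch only (hypothetical runCmd helper, not minikube's ssh_runner):
// list CRI containers per component name, then tail the logs of any that exist.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runCmd(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		ids, _ := runCmd("sudo crictl ps -a --quiet --name=" + name)
		if ids == "" {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range strings.Fields(ids) {
			logs, _ := runCmd("sudo crictl logs --tail 400 " + id)
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}
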
	I1108 10:27:52.634362 1188449 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:27:52.634384 1188449 machine.go:97] duration metric: took 6.401107228s to provisionDockerMachine
	I1108 10:27:52.634396 1188449 start.go:293] postStartSetup for "pause-343192" (driver="docker")
	I1108 10:27:52.634406 1188449 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:27:52.634471 1188449 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:27:52.634516 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:52.651668 1188449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34482 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/pause-343192/id_rsa Username:docker}
	I1108 10:27:52.756078 1188449 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:27:52.759215 1188449 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:27:52.759248 1188449 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:27:52.759260 1188449 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:27:52.759317 1188449 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:27:52.759399 1188449 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:27:52.759500 1188449 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:27:52.766671 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:27:52.784137 1188449 start.go:296] duration metric: took 149.725049ms for postStartSetup
	I1108 10:27:52.784257 1188449 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:27:52.784325 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:52.800907 1188449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34482 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/pause-343192/id_rsa Username:docker}
	I1108 10:27:52.901891 1188449 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:27:52.906841 1188449 fix.go:56] duration metric: took 6.695575297s for fixHost
	I1108 10:27:52.906867 1188449 start.go:83] releasing machines lock for "pause-343192", held for 6.695628383s
	I1108 10:27:52.906960 1188449 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-343192
	I1108 10:27:52.924079 1188449 ssh_runner.go:195] Run: cat /version.json
	I1108 10:27:52.924113 1188449 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:27:52.924172 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:52.924187 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:52.946527 1188449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34482 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/pause-343192/id_rsa Username:docker}
	I1108 10:27:52.946554 1188449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34482 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/pause-343192/id_rsa Username:docker}
	I1108 10:27:53.150635 1188449 ssh_runner.go:195] Run: systemctl --version
	I1108 10:27:53.158177 1188449 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:27:53.217272 1188449 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:27:53.227331 1188449 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:27:53.227449 1188449 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:27:53.236985 1188449 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:27:53.237059 1188449 start.go:496] detecting cgroup driver to use...
	I1108 10:27:53.237104 1188449 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:27:53.237175 1188449 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:27:53.252498 1188449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:27:53.269822 1188449 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:27:53.269890 1188449 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:27:53.286414 1188449 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:27:53.300655 1188449 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:27:53.487645 1188449 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:27:53.659250 1188449 docker.go:234] disabling docker service ...
	I1108 10:27:53.659322 1188449 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:27:53.675341 1188449 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:27:53.689349 1188449 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:27:53.859426 1188449 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:27:54.015722 1188449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:27:54.031262 1188449 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:27:54.045873 1188449 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:27:54.045951 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.055150 1188449 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:27:54.055225 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.065284 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.075367 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.085090 1188449 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:27:54.094164 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.103419 1188449 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.112320 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.121596 1188449 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:27:54.129478 1188449 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:27:54.141868 1188449 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:27:54.274189 1188449 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:27:54.462766 1188449 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:27:54.462835 1188449 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:27:54.466740 1188449 start.go:564] Will wait 60s for crictl version
	I1108 10:27:54.466799 1188449 ssh_runner.go:195] Run: which crictl
	I1108 10:27:54.470252 1188449 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:27:54.503589 1188449 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:27:54.503736 1188449 ssh_runner.go:195] Run: crio --version
	I1108 10:27:54.532552 1188449 ssh_runner.go:195] Run: crio --version
	I1108 10:27:54.563110 1188449 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:27:54.566073 1188449 cli_runner.go:164] Run: docker network inspect pause-343192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:27:54.581612 1188449 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:27:54.585329 1188449 kubeadm.go:884] updating cluster {Name:pause-343192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-343192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:27:54.585473 1188449 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:27:54.585524 1188449 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:27:54.615964 1188449 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:27:54.615989 1188449 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:27:54.616043 1188449 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:27:54.644969 1188449 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:27:54.644992 1188449 cache_images.go:86] Images are preloaded, skipping loading
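
The two "sudo crictl images --output json" runs above are how minikube decides that the preload tarball does not need to be extracted again. A rough sketch of that check, assuming the usual crictl JSON shape ({"images":[{"repoTags":[...]}, ...]}) and an illustrative required-image list rather than minikube's exact set:

// Rough sketch of the preload verification implied above. The JSON field names
// and the required-image list are assumptions for illustration.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/pause:3.10.1",
	}
	for _, want := range required {
		if !have[want] {
			fmt.Println("missing preloaded image:", want)
			return
		}
	}
	fmt.Println("all images are preloaded for cri-o runtime.")
}
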
	I1108 10:27:54.645001 1188449 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 10:27:54.645105 1188449 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-343192 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-343192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:27:54.645181 1188449 ssh_runner.go:195] Run: crio config
	I1108 10:27:54.697689 1188449 cni.go:84] Creating CNI manager for ""
	I1108 10:27:54.697755 1188449 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:27:54.697778 1188449 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:27:54.697803 1188449 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-343192 NodeName:pause-343192 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:27:54.697935 1188449 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-343192"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:27:54.698008 1188449 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:27:54.705607 1188449 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:27:54.705676 1188449 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:27:54.712963 1188449 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1108 10:27:54.727187 1188449 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:27:54.740358 1188449 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
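
The rendered kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new rather than over the live file; later in this run an empty "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new" is what lets minikube conclude the running cluster needs no reconfiguration. A sketch of that write-then-diff decision, with paths taken from the log and the helper itself assumed:

// Sketch of the decision visible later in this log: after the rendered config
// has been copied to kubeadm.yaml.new (the scp above), an empty diff against
// the live kubeadm.yaml means the control plane can be restarted without
// re-running kubeadm. Hypothetical helper, not minikube's actual code.
package main

import (
	"fmt"
	"os/exec"
)

func needsReconfiguration(current, generated string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", current, generated).Run()
	if err == nil {
		return false, nil // files identical: restart-only path
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, nil // diff found differences: reconfigure
	}
	return false, err // diff itself failed (missing file, permissions, ...)
}

func main() {
	changed, err := needsReconfiguration(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	fmt.Println("needs reconfiguration:", changed)
}
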
	I1108 10:27:54.752993 1188449 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:27:54.756429 1188449 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:27:54.892269 1188449 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:27:54.905287 1188449 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192 for IP: 192.168.85.2
	I1108 10:27:54.905310 1188449 certs.go:195] generating shared ca certs ...
	I1108 10:27:54.905327 1188449 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:27:54.905540 1188449 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:27:54.905615 1188449 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:27:54.905629 1188449 certs.go:257] generating profile certs ...
	I1108 10:27:54.905732 1188449 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/client.key
	I1108 10:27:54.905807 1188449 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/apiserver.key.fbeb1480
	I1108 10:27:54.905859 1188449 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/proxy-client.key
	I1108 10:27:54.905977 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:27:54.906011 1188449 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:27:54.906024 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:27:54.906051 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:27:54.906078 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:27:54.906134 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:27:54.906180 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:27:54.906819 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:27:54.927058 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:27:54.947939 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:27:54.971559 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:27:54.993926 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 10:27:55.043413 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:27:55.067881 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:27:55.116241 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:27:55.174335 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:27:55.218452 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:27:55.274877 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:27:55.299004 1188449 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:27:55.315946 1188449 ssh_runner.go:195] Run: openssl version
	I1108 10:27:55.323502 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:27:55.333787 1188449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:27:55.338386 1188449 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:27:55.338508 1188449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:27:55.396385 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:27:55.406600 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:27:55.424523 1188449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:27:55.430030 1188449 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:27:55.430149 1188449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:27:55.497659 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:27:55.508951 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:27:55.518639 1188449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:27:55.522586 1188449 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:27:55.522649 1188449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:27:55.574982 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
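
The openssl x509 -hash / ln -fs pairs above install each certificate into the OpenSSL hashed-directory layout under /etc/ssl/certs, where TLS libraries look certificates up by a <subject-hash>.0 filename. A small sketch of the same step, shelling out to openssl for the hash; the path is one from the log, the surrounding logic is assumed and slightly simplified:

// Sketch of the subject-hash symlink step above: compute the OpenSSL subject
// hash of a PEM certificate and link it as /etc/ssl/certs/<hash>.0 so TLS
// libraries can find it by issuer. Error handling is minimal on purpose.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" as seen in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", certPath, "as", link)
}
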
	I1108 10:27:55.583657 1188449 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:27:55.587683 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:27:55.629677 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:27:55.671489 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:27:55.723283 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:27:55.773098 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:27:55.814132 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
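
Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a failing check is what would push minikube to regenerate a cert instead of reusing it. The same probe expressed with Go's crypto/x509, purely as an illustration:

// Pure-Go equivalent of the `openssl x509 -checkend 86400` probes above:
// report whether a PEM certificate expires within the next 24 hours.
// The path is one of those checked in the log; run as root to read it.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h (checkend would fail)")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
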
	I1108 10:27:55.857624 1188449 kubeadm.go:401] StartCluster: {Name:pause-343192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-343192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:27:55.857741 1188449 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:27:55.857811 1188449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:27:55.888735 1188449 cri.go:89] found id: "0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f"
	I1108 10:27:55.888761 1188449 cri.go:89] found id: "f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981"
	I1108 10:27:55.888766 1188449 cri.go:89] found id: "cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb"
	I1108 10:27:55.888769 1188449 cri.go:89] found id: "d7a533806da9a332f1645c3360d5c4237a84729469ae7cb42a33daa107441f86"
	I1108 10:27:55.888774 1188449 cri.go:89] found id: "a2e998f95e3dabd458d90198ae4130a56a78b9685b3e0f821b670a31300781b6"
	I1108 10:27:55.888778 1188449 cri.go:89] found id: "808a055bd254c0bbbee4c3c751830708801f4ced02a2c5deb329197a434cd541"
	I1108 10:27:55.888782 1188449 cri.go:89] found id: "7036025861b31b3ce32c7deda2244e7cb402d4a8ef261e6ea3f8a57bb78fce01"
	I1108 10:27:55.888785 1188449 cri.go:89] found id: "1d28edcd8cca7648e1bc0b2fb042df7c5b1f90debfa5083af69296a4afa052d1"
	I1108 10:27:55.888788 1188449 cri.go:89] found id: "cbed26c9cc82d142d3d895dc7635d0efb73e033cb99b08450139b3c5de56c054"
	I1108 10:27:55.888796 1188449 cri.go:89] found id: "a327dc75a2da5df572b9729b0560d0810a03921afea0a1ea766f4032377a4d50"
	I1108 10:27:55.888802 1188449 cri.go:89] found id: "e64d76a590f592ad5123ea146cba17cee655e4c302e7d2c00d65f628678c8146"
	I1108 10:27:55.888806 1188449 cri.go:89] found id: "6cf1df7c69fa46c783c4d0d0ed7275b2f7575903b38be95723c5fadb80a5adb2"
	I1108 10:27:55.888809 1188449 cri.go:89] found id: "7a08c37ef37992bde0d0bd0f71fdddbca47883b01dd90e96da703efd35f23fd8"
	I1108 10:27:55.888812 1188449 cri.go:89] found id: "4c21fbaf9d079fb5c4cbd03ca8e0149295b10f764ae1c6826063a0516b80ba46"
	I1108 10:27:55.888816 1188449 cri.go:89] found id: ""
	I1108 10:27:55.888866 1188449 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:27:55.901846 1188449 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:27:55Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:27:55.901917 1188449 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:27:55.917033 1188449 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:27:55.917053 1188449 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:27:55.917105 1188449 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:27:55.928613 1188449 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:27:55.929237 1188449 kubeconfig.go:125] found "pause-343192" server: "https://192.168.85.2:8443"
	I1108 10:27:55.930028 1188449 kapi.go:59] client config for pause-343192: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/client.crt", KeyFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/client.key", CAFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21275c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 10:27:55.930507 1188449 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1108 10:27:55.930527 1188449 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1108 10:27:55.930533 1188449 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1108 10:27:55.930538 1188449 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1108 10:27:55.930545 1188449 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1108 10:27:55.930852 1188449 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:27:55.941642 1188449 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:27:55.941677 1188449 kubeadm.go:602] duration metric: took 24.618448ms to restartPrimaryControlPlane
	I1108 10:27:55.941686 1188449 kubeadm.go:403] duration metric: took 84.07357ms to StartCluster
	I1108 10:27:55.941703 1188449 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:27:55.941766 1188449 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:27:55.942627 1188449 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:27:55.942833 1188449 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:27:55.943161 1188449 config.go:182] Loaded profile config "pause-343192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:27:55.943207 1188449 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:27:55.947871 1188449 out.go:179] * Enabled addons: 
	I1108 10:27:55.947960 1188449 out.go:179] * Verifying Kubernetes components...
	I1108 10:27:55.950764 1188449 addons.go:515] duration metric: took 7.55534ms for enable addons: enabled=[]
	I1108 10:27:55.950848 1188449 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:27:56.382339 1173175 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:27:56.382733 1173175 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1108 10:27:56.382784 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 10:27:56.382841 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 10:27:56.425929 1173175 cri.go:89] found id: "8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:56.425951 1173175 cri.go:89] found id: ""
	I1108 10:27:56.425959 1173175 logs.go:282] 1 containers: [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98]
	I1108 10:27:56.426029 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:56.430034 1173175 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 10:27:56.430105 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 10:27:56.481528 1173175 cri.go:89] found id: ""
	I1108 10:27:56.481555 1173175 logs.go:282] 0 containers: []
	W1108 10:27:56.481564 1173175 logs.go:284] No container was found matching "etcd"
	I1108 10:27:56.481569 1173175 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 10:27:56.481629 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 10:27:56.528643 1173175 cri.go:89] found id: ""
	I1108 10:27:56.528671 1173175 logs.go:282] 0 containers: []
	W1108 10:27:56.528695 1173175 logs.go:284] No container was found matching "coredns"
	I1108 10:27:56.528702 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 10:27:56.528777 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 10:27:56.568272 1173175 cri.go:89] found id: "1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:56.568297 1173175 cri.go:89] found id: ""
	I1108 10:27:56.568306 1173175 logs.go:282] 1 containers: [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412]
	I1108 10:27:56.568360 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:56.572074 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 10:27:56.572152 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 10:27:56.612394 1173175 cri.go:89] found id: ""
	I1108 10:27:56.612414 1173175 logs.go:282] 0 containers: []
	W1108 10:27:56.612423 1173175 logs.go:284] No container was found matching "kube-proxy"
	I1108 10:27:56.612430 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 10:27:56.612528 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 10:27:56.663151 1173175 cri.go:89] found id: "1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:56.663172 1173175 cri.go:89] found id: ""
	I1108 10:27:56.663183 1173175 logs.go:282] 1 containers: [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c]
	I1108 10:27:56.663249 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:56.672762 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 10:27:56.672849 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 10:27:56.717759 1173175 cri.go:89] found id: ""
	I1108 10:27:56.717791 1173175 logs.go:282] 0 containers: []
	W1108 10:27:56.717799 1173175 logs.go:284] No container was found matching "kindnet"
	I1108 10:27:56.717806 1173175 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 10:27:56.717875 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 10:27:56.761256 1173175 cri.go:89] found id: ""
	I1108 10:27:56.761289 1173175 logs.go:282] 0 containers: []
	W1108 10:27:56.761298 1173175 logs.go:284] No container was found matching "storage-provisioner"
	I1108 10:27:56.761308 1173175 logs.go:123] Gathering logs for dmesg ...
	I1108 10:27:56.761319 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 10:27:56.783740 1173175 logs.go:123] Gathering logs for describe nodes ...
	I1108 10:27:56.783768 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 10:27:56.887768 1173175 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 10:27:56.887791 1173175 logs.go:123] Gathering logs for kube-apiserver [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98] ...
	I1108 10:27:56.887803 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:56.931725 1173175 logs.go:123] Gathering logs for kube-scheduler [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412] ...
	I1108 10:27:56.931754 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:57.050294 1173175 logs.go:123] Gathering logs for kube-controller-manager [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c] ...
	I1108 10:27:57.050338 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:57.086326 1173175 logs.go:123] Gathering logs for CRI-O ...
	I1108 10:27:57.086354 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 10:27:57.160549 1173175 logs.go:123] Gathering logs for container status ...
	I1108 10:27:57.160587 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 10:27:57.208107 1173175 logs.go:123] Gathering logs for kubelet ...
	I1108 10:27:57.208136 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 10:27:56.185150 1188449 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:27:56.201654 1188449 node_ready.go:35] waiting up to 6m0s for node "pause-343192" to be "Ready" ...
	I1108 10:28:00.900871 1188449 node_ready.go:49] node "pause-343192" is "Ready"
	I1108 10:28:00.900899 1188449 node_ready.go:38] duration metric: took 4.699164239s for node "pause-343192" to be "Ready" ...
	I1108 10:28:00.900913 1188449 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:28:00.900974 1188449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:28:00.917793 1188449 api_server.go:72] duration metric: took 4.974922945s to wait for apiserver process to appear ...
	I1108 10:28:00.917817 1188449 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:28:00.917837 1188449 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:28:00.966846 1188449 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:28:00.966942 1188449 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:27:59.863541 1173175 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:28:01.418483 1188449 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:28:01.427808 1188449 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:28:01.427838 1188449 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:28:01.918462 1188449 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:28:01.927653 1188449 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:28:01.928835 1188449 api_server.go:141] control plane version: v1.34.1
	I1108 10:28:01.928861 1188449 api_server.go:131] duration metric: took 1.011036679s to wait for apiserver health ...
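
The 500 responses above come from apiserver post-start hooks (rbac/bootstrap-roles, bootstrap-controller, and so on) that have not completed yet; minikube simply keeps polling /healthz until it gets a plain 200 "ok", which here takes roughly a second. A stripped-down version of that wait loop, for illustration only; the real client trusts the cluster CA rather than skipping TLS verification:

// Sketch of the wait-for-healthz loop implied by the log above: poll the
// apiserver /healthz endpoint until it returns 200, treating 500 responses
// (post-start hooks still failing) as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
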
	I1108 10:28:01.928871 1188449 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:28:01.933381 1188449 system_pods.go:59] 7 kube-system pods found
	I1108 10:28:01.933423 1188449 system_pods.go:61] "coredns-66bc5c9577-z4htg" [ccbca0f1-a4f6-4bdb-91f4-b4eb718ee497] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:28:01.933432 1188449 system_pods.go:61] "etcd-pause-343192" [e9dd9e24-4928-4921-baba-1e43583dec44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:28:01.933439 1188449 system_pods.go:61] "kindnet-5dl8w" [e6e7ac85-7324-4cb4-955e-95b1709547a2] Running
	I1108 10:28:01.933447 1188449 system_pods.go:61] "kube-apiserver-pause-343192" [aa6ba0e4-9923-46ed-b85f-5d6ba133f16a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:28:01.933456 1188449 system_pods.go:61] "kube-controller-manager-pause-343192" [0a07fd03-853c-4136-b7e1-a7331811ab39] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:28:01.933466 1188449 system_pods.go:61] "kube-proxy-774lt" [840433f1-4620-41e8-80eb-4190421a0b49] Running
	I1108 10:28:01.933475 1188449 system_pods.go:61] "kube-scheduler-pause-343192" [dc53200d-9f68-4dba-aa09-b0e0839beae5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:28:01.933483 1188449 system_pods.go:74] duration metric: took 4.604999ms to wait for pod list to return data ...
	I1108 10:28:01.933496 1188449 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:28:01.936076 1188449 default_sa.go:45] found service account: "default"
	I1108 10:28:01.936100 1188449 default_sa.go:55] duration metric: took 2.597213ms for default service account to be created ...
	I1108 10:28:01.936110 1188449 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:28:01.938932 1188449 system_pods.go:86] 7 kube-system pods found
	I1108 10:28:01.938964 1188449 system_pods.go:89] "coredns-66bc5c9577-z4htg" [ccbca0f1-a4f6-4bdb-91f4-b4eb718ee497] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:28:01.938994 1188449 system_pods.go:89] "etcd-pause-343192" [e9dd9e24-4928-4921-baba-1e43583dec44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:28:01.939010 1188449 system_pods.go:89] "kindnet-5dl8w" [e6e7ac85-7324-4cb4-955e-95b1709547a2] Running
	I1108 10:28:01.939019 1188449 system_pods.go:89] "kube-apiserver-pause-343192" [aa6ba0e4-9923-46ed-b85f-5d6ba133f16a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:28:01.939034 1188449 system_pods.go:89] "kube-controller-manager-pause-343192" [0a07fd03-853c-4136-b7e1-a7331811ab39] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:28:01.939039 1188449 system_pods.go:89] "kube-proxy-774lt" [840433f1-4620-41e8-80eb-4190421a0b49] Running
	I1108 10:28:01.939049 1188449 system_pods.go:89] "kube-scheduler-pause-343192" [dc53200d-9f68-4dba-aa09-b0e0839beae5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:28:01.939087 1188449 system_pods.go:126] duration metric: took 2.960098ms to wait for k8s-apps to be running ...
	I1108 10:28:01.939104 1188449 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:28:01.939170 1188449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:28:01.954623 1188449 system_svc.go:56] duration metric: took 15.505853ms WaitForService to wait for kubelet
	I1108 10:28:01.954654 1188449 kubeadm.go:587] duration metric: took 6.011788708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:28:01.954675 1188449 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:28:01.958495 1188449 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:28:01.958533 1188449 node_conditions.go:123] node cpu capacity is 2
	I1108 10:28:01.958547 1188449 node_conditions.go:105] duration metric: took 3.8657ms to run NodePressure ...
	I1108 10:28:01.958561 1188449 start.go:242] waiting for startup goroutines ...
	I1108 10:28:01.958569 1188449 start.go:247] waiting for cluster config update ...
	I1108 10:28:01.958576 1188449 start.go:256] writing updated cluster config ...
	I1108 10:28:01.958962 1188449 ssh_runner.go:195] Run: rm -f paused
	I1108 10:28:01.963567 1188449 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:28:01.964266 1188449 kapi.go:59] client config for pause-343192: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/client.crt", KeyFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/client.key", CAFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21275c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 10:28:01.967689 1188449 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z4htg" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:28:03.972839 1188449 pod_ready.go:104] pod "coredns-66bc5c9577-z4htg" is not "Ready", error: <nil>
	W1108 10:28:05.973632 1188449 pod_ready.go:104] pod "coredns-66bc5c9577-z4htg" is not "Ready", error: <nil>
	I1108 10:28:04.863861 1173175 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 10:28:04.863921 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 10:28:04.863987 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 10:28:04.890675 1173175 cri.go:89] found id: "f6b0773c68d746faa2430b80e04ecdde7ed1220310045bd0a7f4cafa3b838acf"
	I1108 10:28:04.890698 1173175 cri.go:89] found id: "8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:28:04.890703 1173175 cri.go:89] found id: ""
	I1108 10:28:04.890710 1173175 logs.go:282] 2 containers: [f6b0773c68d746faa2430b80e04ecdde7ed1220310045bd0a7f4cafa3b838acf 8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98]
	I1108 10:28:04.890770 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:28:04.894537 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:28:04.898476 1173175 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 10:28:04.898550 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 10:28:04.924379 1173175 cri.go:89] found id: ""
	I1108 10:28:04.924406 1173175 logs.go:282] 0 containers: []
	W1108 10:28:04.924415 1173175 logs.go:284] No container was found matching "etcd"
	I1108 10:28:04.924420 1173175 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 10:28:04.924524 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 10:28:04.952279 1173175 cri.go:89] found id: ""
	I1108 10:28:04.952306 1173175 logs.go:282] 0 containers: []
	W1108 10:28:04.952315 1173175 logs.go:284] No container was found matching "coredns"
	I1108 10:28:04.952321 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 10:28:04.952388 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 10:28:04.982154 1173175 cri.go:89] found id: "1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:28:04.982179 1173175 cri.go:89] found id: ""
	I1108 10:28:04.982188 1173175 logs.go:282] 1 containers: [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412]
	I1108 10:28:04.982249 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:28:04.986005 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 10:28:04.986078 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 10:28:05.014860 1173175 cri.go:89] found id: ""
	I1108 10:28:05.014885 1173175 logs.go:282] 0 containers: []
	W1108 10:28:05.014893 1173175 logs.go:284] No container was found matching "kube-proxy"
	I1108 10:28:05.014899 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 10:28:05.014961 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 10:28:05.044185 1173175 cri.go:89] found id: "1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:28:05.044207 1173175 cri.go:89] found id: ""
	I1108 10:28:05.044216 1173175 logs.go:282] 1 containers: [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c]
	I1108 10:28:05.044271 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:28:05.047888 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 10:28:05.047958 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 10:28:05.077825 1173175 cri.go:89] found id: ""
	I1108 10:28:05.077849 1173175 logs.go:282] 0 containers: []
	W1108 10:28:05.077858 1173175 logs.go:284] No container was found matching "kindnet"
	I1108 10:28:05.077864 1173175 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 10:28:05.077921 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 10:28:05.105475 1173175 cri.go:89] found id: ""
	I1108 10:28:05.105500 1173175 logs.go:282] 0 containers: []
	W1108 10:28:05.105522 1173175 logs.go:284] No container was found matching "storage-provisioner"
	I1108 10:28:05.105559 1173175 logs.go:123] Gathering logs for dmesg ...
	I1108 10:28:05.105578 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 10:28:05.124215 1173175 logs.go:123] Gathering logs for kube-apiserver [f6b0773c68d746faa2430b80e04ecdde7ed1220310045bd0a7f4cafa3b838acf] ...
	I1108 10:28:05.124247 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6b0773c68d746faa2430b80e04ecdde7ed1220310045bd0a7f4cafa3b838acf"
	I1108 10:28:05.160267 1173175 logs.go:123] Gathering logs for CRI-O ...
	I1108 10:28:05.160299 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 10:28:05.221433 1173175 logs.go:123] Gathering logs for container status ...
	I1108 10:28:05.221467 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 10:28:05.252220 1173175 logs.go:123] Gathering logs for describe nodes ...
	I1108 10:28:05.252251 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1108 10:28:07.473625 1188449 pod_ready.go:94] pod "coredns-66bc5c9577-z4htg" is "Ready"
	I1108 10:28:07.473656 1188449 pod_ready.go:86] duration metric: took 5.505940122s for pod "coredns-66bc5c9577-z4htg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:07.476262 1188449 pod_ready.go:83] waiting for pod "etcd-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:28:09.482983 1188449 pod_ready.go:104] pod "etcd-pause-343192" is not "Ready", error: <nil>
	W1108 10:28:11.981454 1188449 pod_ready.go:104] pod "etcd-pause-343192" is not "Ready", error: <nil>
	W1108 10:28:13.982098 1188449 pod_ready.go:104] pod "etcd-pause-343192" is not "Ready", error: <nil>
	I1108 10:28:14.981696 1188449 pod_ready.go:94] pod "etcd-pause-343192" is "Ready"
	I1108 10:28:14.981727 1188449 pod_ready.go:86] duration metric: took 7.505441455s for pod "etcd-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:14.984181 1188449 pod_ready.go:83] waiting for pod "kube-apiserver-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:14.988800 1188449 pod_ready.go:94] pod "kube-apiserver-pause-343192" is "Ready"
	I1108 10:28:14.988831 1188449 pod_ready.go:86] duration metric: took 4.624354ms for pod "kube-apiserver-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:14.991218 1188449 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:14.995325 1188449 pod_ready.go:94] pod "kube-controller-manager-pause-343192" is "Ready"
	I1108 10:28:14.995348 1188449 pod_ready.go:86] duration metric: took 4.104307ms for pod "kube-controller-manager-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:14.997650 1188449 pod_ready.go:83] waiting for pod "kube-proxy-774lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:15.180095 1188449 pod_ready.go:94] pod "kube-proxy-774lt" is "Ready"
	I1108 10:28:15.180119 1188449 pod_ready.go:86] duration metric: took 182.444472ms for pod "kube-proxy-774lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:15.380304 1188449 pod_ready.go:83] waiting for pod "kube-scheduler-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:15.780089 1188449 pod_ready.go:94] pod "kube-scheduler-pause-343192" is "Ready"
	I1108 10:28:15.780115 1188449 pod_ready.go:86] duration metric: took 399.782543ms for pod "kube-scheduler-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:15.780127 1188449 pod_ready.go:40] duration metric: took 13.816483238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:28:15.841245 1188449 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:28:15.844379 1188449 out.go:179] * Done! kubectl is now configured to use "pause-343192" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.114989858Z" level=info msg="Started container" PID=2244 containerID=a2e998f95e3dabd458d90198ae4130a56a78b9685b3e0f821b670a31300781b6 description=kube-system/etcd-pause-343192/etcd id=ae80516e-7b2b-4d51-a744-e7df56497b0b name=/runtime.v1.RuntimeService/StartContainer sandboxID=933868f3bd04b1fe383e4005366c51e0ba5af4a2beede9e23bd89c36f1ad0a1c
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.19141109Z" level=info msg="Created container f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981: kube-system/kube-controller-manager-pause-343192/kube-controller-manager" id=8fa6a36c-9113-44ca-a6f9-a28dd90bd418 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.192477434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.193087046Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.199979672Z" level=info msg="Starting container: f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981" id=1af01398-4c16-477e-a55c-8f3be6b83624 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.200243983Z" level=info msg="Started container" PID=2254 containerID=d7a533806da9a332f1645c3360d5c4237a84729469ae7cb42a33daa107441f86 description=kube-system/kube-scheduler-pause-343192/kube-scheduler id=e3195917-05e4-411b-a5b4-bff55e638640 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5de00ebc1a698b723db5732db4979d304e64f58f318310c8d3ddabc8e5571ea6
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.212270476Z" level=info msg="Started container" PID=2290 containerID=f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981 description=kube-system/kube-controller-manager-pause-343192/kube-controller-manager id=1af01398-4c16-477e-a55c-8f3be6b83624 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71da58ac41e9e6288ff3e71252c287fffc70241759981d7785f048ddee3efb5d
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.222137912Z" level=info msg="Created container cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb: kube-system/kube-proxy-774lt/kube-proxy" id=a14e5092-583d-4fd1-b8fa-1e491082bb5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.226717319Z" level=info msg="Starting container: cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb" id=76a8a652-f441-48d2-abba-4f6c80048c5a name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.240549569Z" level=info msg="Started container" PID=2281 containerID=cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb description=kube-system/kube-proxy-774lt/kube-proxy id=76a8a652-f441-48d2-abba-4f6c80048c5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=c68d6f30b5413edf011b9619cd4b4670850fad852e914baf85e06aac5a6b4dba
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.259557243Z" level=info msg="Created container 0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f: kube-system/kube-apiserver-pause-343192/kube-apiserver" id=6cf01fea-f2f2-4dc6-91a5-6eae5cbf4e44 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.260288911Z" level=info msg="Starting container: 0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f" id=258d5b4b-76cc-41f0-9a06-7d5c8763dca1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.262525931Z" level=info msg="Started container" PID=2329 containerID=0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f description=kube-system/kube-apiserver-pause-343192/kube-apiserver id=258d5b4b-76cc-41f0-9a06-7d5c8763dca1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8734a217dcf0f44f1976df5c61aede2dca74686bece4a74636e39bb2560ab553
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.419850348Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.423823827Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.424013876Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.424066436Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.42734988Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.427383913Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.427414033Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.430824053Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.430858037Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.430880715Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.43387241Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.433903605Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0295e0bdcf0cd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   23 seconds ago       Running             kube-apiserver            1                   8734a217dcf0f       kube-apiserver-pause-343192            kube-system
	f4c13d129af56       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   23 seconds ago       Running             kube-controller-manager   1                   71da58ac41e9e       kube-controller-manager-pause-343192   kube-system
	cf7214c160cf2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   23 seconds ago       Running             kube-proxy                1                   c68d6f30b5413       kube-proxy-774lt                       kube-system
	d7a533806da9a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   5de00ebc1a698       kube-scheduler-pause-343192            kube-system
	a2e998f95e3da       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   933868f3bd04b       etcd-pause-343192                      kube-system
	808a055bd254c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   e04012ef6226c       kindnet-5dl8w                          kube-system
	7036025861b31       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   5b50f6840c84a       coredns-66bc5c9577-z4htg               kube-system
	1d28edcd8cca7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   5b50f6840c84a       coredns-66bc5c9577-z4htg               kube-system
	cbed26c9cc82d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   c68d6f30b5413       kube-proxy-774lt                       kube-system
	a327dc75a2da5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   e04012ef6226c       kindnet-5dl8w                          kube-system
	e64d76a590f59       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   933868f3bd04b       etcd-pause-343192                      kube-system
	6cf1df7c69fa4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   5de00ebc1a698       kube-scheduler-pause-343192            kube-system
	7a08c37ef3799       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   71da58ac41e9e       kube-controller-manager-pause-343192   kube-system
	4c21fbaf9d079       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   8734a217dcf0f       kube-apiserver-pause-343192            kube-system
	
	
	==> coredns [1d28edcd8cca7648e1bc0b2fb042df7c5b1f90debfa5083af69296a4afa052d1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39444 - 21665 "HINFO IN 3237423409004509854.6412274812491590403. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019736976s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7036025861b31b3ce32c7deda2244e7cb402d4a8ef261e6ea3f8a57bb78fce01] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43838 - 48464 "HINFO IN 807073959000323933.6825489918620195988. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.039072833s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               pause-343192
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-343192
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=pause-343192
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_26_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:26:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-343192
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:28:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:28:11 +0000   Sat, 08 Nov 2025 10:26:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:28:11 +0000   Sat, 08 Nov 2025 10:26:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:28:11 +0000   Sat, 08 Nov 2025 10:26:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:28:11 +0000   Sat, 08 Nov 2025 10:27:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-343192
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                b0412103-dbad-4614-89ab-45b015153528
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-z4htg                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-343192                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-5dl8w                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-343192             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-343192    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-774lt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-343192             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 76s   kube-proxy       
	  Normal   Starting                 18s   kube-proxy       
	  Normal   Starting                 83s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 83s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s   kubelet          Node pause-343192 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s   kubelet          Node pause-343192 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s   kubelet          Node pause-343192 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s   node-controller  Node pause-343192 event: Registered Node pause-343192 in Controller
	  Normal   NodeReady                36s   kubelet          Node pause-343192 status is now: NodeReady
	  Normal   RegisteredNode           16s   node-controller  Node pause-343192 event: Registered Node pause-343192 in Controller
	
	
	==> dmesg <==
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[  +3.322852] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[ +18.943896] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:09] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[ +18.424643] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a2e998f95e3dabd458d90198ae4130a56a78b9685b3e0f821b670a31300781b6] <==
	{"level":"warn","ts":"2025-11-08T10:27:57.818476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.849149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.869470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.898206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.919791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.975880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.999387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.029247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.050299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.079074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.124105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.162183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.197714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.251436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.277611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.327996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.376723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.421306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.450785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.498853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.552314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.613176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.655606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.688649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.890276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48792","server-name":"","error":"EOF"}
	
	
	==> etcd [e64d76a590f592ad5123ea146cba17cee655e4c302e7d2c00d65f628678c8146] <==
	{"level":"warn","ts":"2025-11-08T10:26:53.027945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.045825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.063170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.114398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.132301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.160013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.284668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45322","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T10:27:47.466363Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-08T10:27:47.466435Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-343192","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-08T10:27:47.466531Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T10:27:47.609617Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T10:27:47.609699Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T10:27:47.609722Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-08T10:27:47.609801Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-08T10:27:47.609843Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T10:27:47.609880Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T10:27:47.609891Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T10:27:47.609861Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-08T10:27:47.609929Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T10:27:47.609959Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T10:27:47.609967Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T10:27:47.613303Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-08T10:27:47.613383Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T10:27:47.613411Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:27:47.613429Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-343192","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 10:28:19 up  9:10,  0 user,  load average: 1.87, 2.71, 2.39
	Linux pause-343192 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [808a055bd254c0bbbee4c3c751830708801f4ced02a2c5deb329197a434cd541] <==
	I1108 10:27:55.221536       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:27:55.221730       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:27:55.221853       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:27:55.221870       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:27:55.221883       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:27:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:27:55.416965       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:27:55.425732       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:27:55.425849       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:27:55.426644       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 10:28:01.026723       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:28:01.026828       1 metrics.go:72] Registering metrics
	I1108 10:28:01.026941       1 controller.go:711] "Syncing nftables rules"
	I1108 10:28:05.419458       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:28:05.419519       1 main.go:301] handling current node
	I1108 10:28:15.416893       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:28:15.417014       1 main.go:301] handling current node
	
	
	==> kindnet [a327dc75a2da5df572b9729b0560d0810a03921afea0a1ea766f4032377a4d50] <==
	I1108 10:27:02.610240       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:27:02.610599       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:27:02.610754       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:27:02.610826       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:27:02.610860       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:27:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:27:02.809922       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:27:02.809951       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:27:02.810003       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:27:02.811559       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:27:32.812885       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:27:32.813012       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:27:32.813017       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:27:32.813115       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:27:34.310478       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:27:34.310602       1 metrics.go:72] Registering metrics
	I1108 10:27:34.310721       1 controller.go:711] "Syncing nftables rules"
	I1108 10:27:42.809649       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:27:42.809706       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f] <==
	I1108 10:28:00.857250       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:28:00.885911       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:28:00.895548       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:28:00.895587       1 policy_source.go:240] refreshing policies
	I1108 10:28:00.910004       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:28:00.925751       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:28:00.946371       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:28:00.947113       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 10:28:00.947223       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:28:00.947285       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:28:00.948538       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 10:28:00.951071       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:28:00.962345       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:28:00.948586       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:28:00.948595       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:28:00.948944       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:28:00.970941       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1108 10:28:00.969444       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:28:00.985187       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:28:01.464131       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:28:02.084487       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:28:03.487690       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:28:03.779098       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:28:03.831655       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:28:03.878861       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [4c21fbaf9d079fb5c4cbd03ca8e0149295b10f764ae1c6826063a0516b80ba46] <==
	W1108 10:27:47.483871       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.483912       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.483954       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484057       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484140       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484218       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484291       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484342       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484412       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484508       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484574       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484645       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484704       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484755       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484822       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484892       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484973       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485057       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485162       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485224       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485280       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485428       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485506       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485577       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7a08c37ef37992bde0d0bd0f71fdddbca47883b01dd90e96da703efd35f23fd8] <==
	I1108 10:27:01.237273       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:27:01.240666       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-343192" podCIDRs=["10.244.0.0/24"]
	I1108 10:27:01.242074       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:27:01.250348       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:27:01.254525       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 10:27:01.259092       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 10:27:01.261356       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:27:01.268534       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:27:01.268645       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:27:01.268791       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:27:01.268900       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:27:01.269310       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:27:01.269883       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:27:01.269966       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 10:27:01.269978       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:27:01.269994       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 10:27:01.271344       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:27:01.271404       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:27:01.272629       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:27:01.272699       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 10:27:01.272711       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:27:01.272719       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:27:01.273151       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:27:01.292759       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:27:46.230277       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981] <==
	I1108 10:28:03.476613       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 10:28:03.476653       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:28:03.476522       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:28:03.476716       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 10:28:03.476763       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:28:03.476877       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:28:03.476987       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-343192"
	I1108 10:28:03.476651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 10:28:03.477130       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:28:03.486587       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:28:03.486619       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:28:03.486627       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:28:03.486717       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:28:03.488197       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:28:03.491331       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 10:28:03.491425       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 10:28:03.491447       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 10:28:03.491463       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 10:28:03.491469       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 10:28:03.497996       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:28:03.514283       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:28:03.514255       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:28:03.515071       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:28:03.522542       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:28:03.523807       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cbed26c9cc82d142d3d895dc7635d0efb73e033cb99b08450139b3c5de56c054] <==
	I1108 10:27:02.593489       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:27:02.694365       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:27:02.795358       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:27:02.795395       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:27:02.795471       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:27:02.879224       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:27:02.879280       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:27:02.891902       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:27:02.892201       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:27:02.892224       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:27:02.893739       1 config.go:200] "Starting service config controller"
	I1108 10:27:02.893769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:27:02.893787       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:27:02.893791       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:27:02.893801       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:27:02.893804       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:27:02.894394       1 config.go:309] "Starting node config controller"
	I1108 10:27:02.894412       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:27:02.894418       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:27:02.993925       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:27:02.993954       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 10:27:02.993935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb] <==
	I1108 10:27:57.397150       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:27:59.126795       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:28:01.061986       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:28:01.062031       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:28:01.063024       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:28:01.106889       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:28:01.107009       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:28:01.117065       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:28:01.117416       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:28:01.117433       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:28:01.125284       1 config.go:200] "Starting service config controller"
	I1108 10:28:01.125369       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:28:01.125409       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:28:01.125437       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:28:01.125476       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:28:01.125502       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:28:01.126472       1 config.go:309] "Starting node config controller"
	I1108 10:28:01.126529       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:28:01.126537       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:28:01.226475       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:28:01.226587       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:28:01.226602       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6cf1df7c69fa46c783c4d0d0ed7275b2f7575903b38be95723c5fadb80a5adb2] <==
	E1108 10:26:54.623148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:26:54.623182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:26:54.623237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:26:54.623281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:26:54.623325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:26:54.626628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:26:54.631099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:26:54.631187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:26:54.631246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:26:54.631308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:26:54.631416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:26:54.631465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:26:54.631507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:26:54.631573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:26:54.635156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:26:54.635234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:26:55.460090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:26:55.633247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1108 10:26:58.106700       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:27:47.465103       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1108 10:27:47.465132       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1108 10:27:47.465168       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1108 10:27:47.465209       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:27:47.465960       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1108 10:27:47.466000       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d7a533806da9a332f1645c3360d5c4237a84729469ae7cb42a33daa107441f86] <==
	I1108 10:27:57.204803       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:28:00.980856       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:28:00.980891       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:28:00.999852       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:28:01.000025       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:28:01.000082       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:28:01.000152       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:28:01.001745       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:28:01.006822       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:28:01.006929       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:28:01.006965       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:28:01.100647       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:28:01.107408       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:28:01.107533       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:27:54 pause-343192 kubelet[1316]: E1108 10:27:54.974817    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z4htg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ccbca0f1-a4f6-4bdb-91f4-b4eb718ee497" pod="kube-system/coredns-66bc5c9577-z4htg"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: I1108 10:27:55.006988    1316 scope.go:117] "RemoveContainer" containerID="4c21fbaf9d079fb5c4cbd03ca8e0149295b10f764ae1c6826063a0516b80ba46"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.007931    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z4htg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ccbca0f1-a4f6-4bdb-91f4-b4eb718ee497" pod="kube-system/coredns-66bc5c9577-z4htg"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.008480    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a017b135be5dbd8844db0dbb7371c28d" pod="kube-system/etcd-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.008797    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="80cc6e29e92adf75398fa57125331d6f" pod="kube-system/kube-scheduler-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.009107    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0fb6837cb311c408ba2c0a7149a4c333" pod="kube-system/kube-apiserver-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.009390    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a331d88b5b4a93099e5c6ac0fa526396" pod="kube-system/kube-controller-manager-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.009664    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-5dl8w\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e6e7ac85-7324-4cb4-955e-95b1709547a2" pod="kube-system/kindnet-5dl8w"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: I1108 10:27:55.012883    1316 scope.go:117] "RemoveContainer" containerID="cbed26c9cc82d142d3d895dc7635d0efb73e033cb99b08450139b3c5de56c054"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.016144    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-5dl8w\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e6e7ac85-7324-4cb4-955e-95b1709547a2" pod="kube-system/kindnet-5dl8w"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.016463    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-774lt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="840433f1-4620-41e8-80eb-4190421a0b49" pod="kube-system/kube-proxy-774lt"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.022733    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z4htg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ccbca0f1-a4f6-4bdb-91f4-b4eb718ee497" pod="kube-system/coredns-66bc5c9577-z4htg"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.024868    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a017b135be5dbd8844db0dbb7371c28d" pod="kube-system/etcd-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.025271    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="80cc6e29e92adf75398fa57125331d6f" pod="kube-system/kube-scheduler-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.025566    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0fb6837cb311c408ba2c0a7149a4c333" pod="kube-system/kube-apiserver-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.025867    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a331d88b5b4a93099e5c6ac0fa526396" pod="kube-system/kube-controller-manager-pause-343192"
	Nov 08 10:28:00 pause-343192 kubelet[1316]: E1108 10:28:00.668630    1316 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-343192\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-343192' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 08 10:28:00 pause-343192 kubelet[1316]: E1108 10:28:00.669148    1316 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-343192\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-343192' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 08 10:28:00 pause-343192 kubelet[1316]: E1108 10:28:00.669270    1316 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-343192\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-343192' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 08 10:28:00 pause-343192 kubelet[1316]: E1108 10:28:00.681119    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-343192\" is forbidden: User \"system:node:pause-343192\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-343192' and this object" podUID="a017b135be5dbd8844db0dbb7371c28d" pod="kube-system/etcd-pause-343192"
	Nov 08 10:28:00 pause-343192 kubelet[1316]: E1108 10:28:00.801550    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-343192\" is forbidden: User \"system:node:pause-343192\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-343192' and this object" podUID="80cc6e29e92adf75398fa57125331d6f" pod="kube-system/kube-scheduler-pause-343192"
	Nov 08 10:28:06 pause-343192 kubelet[1316]: W1108 10:28:06.947523    1316 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 08 10:28:16 pause-343192 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:28:16 pause-343192 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:28:16 pause-343192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
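The kubelet shutdown recorded at the very end of the log above lines up with a pause attempt against this profile, and the probes the harness runs next can be repeated by hand. A minimal sketch, assuming a locally installed minikube binary is used in place of the CI-built out/minikube-linux-arm64:

    # harness-style view of the host and API server status
    minikube status -p pause-343192 --format={{.Host}}
    minikube status -p pause-343192 --format={{.APIServer}}
    # confirm whether kubelet is actually running inside the node container
    minikube ssh -p pause-343192 sudo systemctl is-active kubelet
    # list any pods that are not in the Running phase
    kubectl --context pause-343192 get po -A --field-selector=status.phase!=Running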
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-343192 -n pause-343192
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-343192 -n pause-343192: exit status 2 (359.398691ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-343192 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-343192
helpers_test.go:243: (dbg) docker inspect pause-343192:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a",
	        "Created": "2025-11-08T10:26:29.280150982Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1184237,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:26:29.354017573Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a/hostname",
	        "HostsPath": "/var/lib/docker/containers/b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a/hosts",
	        "LogPath": "/var/lib/docker/containers/b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a/b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a-json.log",
	        "Name": "/pause-343192",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-343192:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-343192",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b390adacb4f3f3ec104b9dcc73b2dae79973fd65fd587a082b66a7a23572b37a",
	                "LowerDir": "/var/lib/docker/overlay2/5b62cf98731e9c9fbbaebf9242d274508371b43f530d1daee79cccee16fc9915-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b62cf98731e9c9fbbaebf9242d274508371b43f530d1daee79cccee16fc9915/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b62cf98731e9c9fbbaebf9242d274508371b43f530d1daee79cccee16fc9915/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b62cf98731e9c9fbbaebf9242d274508371b43f530d1daee79cccee16fc9915/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-343192",
	                "Source": "/var/lib/docker/volumes/pause-343192/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-343192",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-343192",
	                "name.minikube.sigs.k8s.io": "pause-343192",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b36c1b5ee1d026b8ad6a8d8a633e5415b17f056b894281aa1469ed9e63e8d8b1",
	            "SandboxKey": "/var/run/docker/netns/b36c1b5ee1d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34482"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34483"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34486"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34484"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34485"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-343192": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:0b:50:d9:3e:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "af705cfd21d261f64ffae0a47851a33e50f9d449ae94c054706bbe7bdf083c91",
	                    "EndpointID": "10b573f0bfaa64dcc67f395a7011be827eae0e8759453d900c42c4da393ad2ba",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-343192",
	                        "b390adacb4f3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
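The forwarded API server port and the container state can be read straight from the inspect output above; for ad-hoc debugging the same fields can be pulled with a Go template. A minimal sketch, assuming the container is still present under the name pause-343192:

    # host port mapped to the API server port 8443/tcp (34485 at the time of this report)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-343192
    # container-level state as seen by Docker (running/paused)
    docker inspect -f '{{.State.Status}}' pause-343192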
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-343192 -n pause-343192
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-343192 -n pause-343192: exit status 2 (433.365306ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-343192 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-343192 logs -n 25: (1.478700347s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-012922 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:22 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p missing-upgrade-625347 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-625347    │ jenkins │ v1.32.0 │ 08 Nov 25 10:22 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-012922 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ delete  │ -p NoKubernetes-012922                                                                                                                   │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-012922 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ ssh     │ -p NoKubernetes-012922 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │                     │
	│ stop    │ -p NoKubernetes-012922                                                                                                                   │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-012922 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p missing-upgrade-625347 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-625347    │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:24 UTC │
	│ ssh     │ -p NoKubernetes-012922 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │                     │
	│ delete  │ -p NoKubernetes-012922                                                                                                                   │ NoKubernetes-012922       │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:23 UTC │
	│ start   │ -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-666491 │ jenkins │ v1.37.0 │ 08 Nov 25 10:23 UTC │ 08 Nov 25 10:24 UTC │
	│ delete  │ -p missing-upgrade-625347                                                                                                                │ missing-upgrade-625347    │ jenkins │ v1.37.0 │ 08 Nov 25 10:24 UTC │ 08 Nov 25 10:24 UTC │
	│ stop    │ -p kubernetes-upgrade-666491                                                                                                             │ kubernetes-upgrade-666491 │ jenkins │ v1.37.0 │ 08 Nov 25 10:24 UTC │ 08 Nov 25 10:24 UTC │
	│ start   │ -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-666491 │ jenkins │ v1.37.0 │ 08 Nov 25 10:24 UTC │                     │
	│ start   │ -p stopped-upgrade-660964 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-660964    │ jenkins │ v1.32.0 │ 08 Nov 25 10:24 UTC │ 08 Nov 25 10:25 UTC │
	│ stop    │ stopped-upgrade-660964 stop                                                                                                              │ stopped-upgrade-660964    │ jenkins │ v1.32.0 │ 08 Nov 25 10:25 UTC │ 08 Nov 25 10:25 UTC │
	│ start   │ -p stopped-upgrade-660964 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-660964    │ jenkins │ v1.37.0 │ 08 Nov 25 10:25 UTC │ 08 Nov 25 10:25 UTC │
	│ delete  │ -p stopped-upgrade-660964                                                                                                                │ stopped-upgrade-660964    │ jenkins │ v1.37.0 │ 08 Nov 25 10:25 UTC │ 08 Nov 25 10:25 UTC │
	│ start   │ -p running-upgrade-980073 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-980073    │ jenkins │ v1.32.0 │ 08 Nov 25 10:25 UTC │ 08 Nov 25 10:26 UTC │
	│ start   │ -p running-upgrade-980073 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-980073    │ jenkins │ v1.37.0 │ 08 Nov 25 10:26 UTC │ 08 Nov 25 10:26 UTC │
	│ delete  │ -p running-upgrade-980073                                                                                                                │ running-upgrade-980073    │ jenkins │ v1.37.0 │ 08 Nov 25 10:26 UTC │ 08 Nov 25 10:26 UTC │
	│ start   │ -p pause-343192 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-343192              │ jenkins │ v1.37.0 │ 08 Nov 25 10:26 UTC │ 08 Nov 25 10:27 UTC │
	│ start   │ -p pause-343192 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-343192              │ jenkins │ v1.37.0 │ 08 Nov 25 10:27 UTC │ 08 Nov 25 10:28 UTC │
	│ pause   │ -p pause-343192 --alsologtostderr -v=5                                                                                                   │ pause-343192              │ jenkins │ v1.37.0 │ 08 Nov 25 10:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
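
The final row is the pause invocation with no recorded completion time; replaying it by hand against the same profile, with the same flags shown in the table, is the most direct way to reproduce the failure:

    # replay of the last table entry (flags copied verbatim from the row above)
    out/minikube-linux-arm64 pause -p pause-343192 --alsologtostderr -v=5
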
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:27:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
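
Every line that follows uses that glog layout, so a saved copy of this log can be reduced to its warnings and errors with a one-line filter (the file name below is only a placeholder):

    # keep severity W/E/F lines: one letter, four digits for mmdd, then a space
    grep -E '^[WEF][0-9]{4} ' last-start.log
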
	I1108 10:27:45.977606 1188449 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:27:45.977781 1188449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:27:45.977795 1188449 out.go:374] Setting ErrFile to fd 2...
	I1108 10:27:45.977801 1188449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:27:45.978100 1188449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:27:45.978498 1188449 out.go:368] Setting JSON to false
	I1108 10:27:45.979515 1188449 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33011,"bootTime":1762564655,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:27:45.979597 1188449 start.go:143] virtualization:  
	I1108 10:27:45.983630 1188449 out.go:179] * [pause-343192] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:27:45.986574 1188449 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:27:45.986685 1188449 notify.go:221] Checking for updates...
	I1108 10:27:45.992571 1188449 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:27:45.995507 1188449 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:27:45.998483 1188449 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:27:46.001421 1188449 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:27:46.004689 1188449 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:27:46.008569 1188449 config.go:182] Loaded profile config "pause-343192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:27:46.009183 1188449 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:27:46.045902 1188449 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:27:46.046019 1188449 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:27:46.108854 1188449 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:27:46.098725493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:27:46.108966 1188449 docker.go:319] overlay module found
	I1108 10:27:46.112168 1188449 out.go:179] * Using the docker driver based on existing profile
	I1108 10:27:46.115038 1188449 start.go:309] selected driver: docker
	I1108 10:27:46.115063 1188449 start.go:930] validating driver "docker" against &{Name:pause-343192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-343192 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:27:46.115234 1188449 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:27:46.115339 1188449 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:27:46.177826 1188449 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:27:46.167901341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:27:46.178290 1188449 cni.go:84] Creating CNI manager for ""
	I1108 10:27:46.178349 1188449 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:27:46.178433 1188449 start.go:353] cluster config:
	{Name:pause-343192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-343192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:27:46.183846 1188449 out.go:179] * Starting "pause-343192" primary control-plane node in "pause-343192" cluster
	I1108 10:27:46.186707 1188449 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:27:46.189579 1188449 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:27:46.192412 1188449 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:27:46.192483 1188449 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:27:46.192488 1188449 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:27:46.192495 1188449 cache.go:59] Caching tarball of preloaded images
	I1108 10:27:46.192584 1188449 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:27:46.192594 1188449 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:27:46.192749 1188449 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/config.json ...
	I1108 10:27:46.211100 1188449 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:27:46.211123 1188449 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:27:46.211141 1188449 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:27:46.211163 1188449 start.go:360] acquireMachinesLock for pause-343192: {Name:mk5a19317988718a71345d25975ea9a0c5d84756 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:27:46.211220 1188449 start.go:364] duration metric: took 35.494µs to acquireMachinesLock for "pause-343192"
	I1108 10:27:46.211249 1188449 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:27:46.211257 1188449 fix.go:54] fixHost starting: 
	I1108 10:27:46.211514 1188449 cli_runner.go:164] Run: docker container inspect pause-343192 --format={{.State.Status}}
	I1108 10:27:46.230116 1188449 fix.go:112] recreateIfNeeded on pause-343192: state=Running err=<nil>
	W1108 10:27:46.230143 1188449 fix.go:138] unexpected machine state, will restart: <nil>
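
The machine-state probe a few lines up is an ordinary docker inspect and can be rerun from the host to confirm the container is still up:

    # same check minikube issued above; prints the container state, e.g. "running"
    docker container inspect pause-343192 --format '{{.State.Status}}'
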
	I1108 10:27:46.530899 1173175 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:27:46.531346 1173175 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
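
The refused healthz probe is reproducible by hand while the apiserver is down; -k only skips certificate verification for this quick check:

    curl -k https://192.168.76.2:8443/healthz
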
	I1108 10:27:46.531391 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 10:27:46.531444 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 10:27:46.559466 1173175 cri.go:89] found id: "8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:46.559485 1173175 cri.go:89] found id: ""
	I1108 10:27:46.559493 1173175 logs.go:282] 1 containers: [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98]
	I1108 10:27:46.559557 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:46.563238 1173175 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 10:27:46.563323 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 10:27:46.600335 1173175 cri.go:89] found id: ""
	I1108 10:27:46.600362 1173175 logs.go:282] 0 containers: []
	W1108 10:27:46.600371 1173175 logs.go:284] No container was found matching "etcd"
	I1108 10:27:46.600377 1173175 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 10:27:46.600463 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 10:27:46.639292 1173175 cri.go:89] found id: ""
	I1108 10:27:46.639318 1173175 logs.go:282] 0 containers: []
	W1108 10:27:46.639326 1173175 logs.go:284] No container was found matching "coredns"
	I1108 10:27:46.639332 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 10:27:46.639395 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 10:27:46.679383 1173175 cri.go:89] found id: "1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:46.679410 1173175 cri.go:89] found id: ""
	I1108 10:27:46.679421 1173175 logs.go:282] 1 containers: [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412]
	I1108 10:27:46.679486 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:46.683141 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 10:27:46.683222 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 10:27:46.722718 1173175 cri.go:89] found id: ""
	I1108 10:27:46.722743 1173175 logs.go:282] 0 containers: []
	W1108 10:27:46.722752 1173175 logs.go:284] No container was found matching "kube-proxy"
	I1108 10:27:46.722758 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 10:27:46.722815 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 10:27:46.753147 1173175 cri.go:89] found id: "1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:46.753172 1173175 cri.go:89] found id: ""
	I1108 10:27:46.753181 1173175 logs.go:282] 1 containers: [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c]
	I1108 10:27:46.753234 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:46.756669 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 10:27:46.756740 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 10:27:46.783459 1173175 cri.go:89] found id: ""
	I1108 10:27:46.783485 1173175 logs.go:282] 0 containers: []
	W1108 10:27:46.783494 1173175 logs.go:284] No container was found matching "kindnet"
	I1108 10:27:46.783500 1173175 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 10:27:46.783558 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 10:27:46.849797 1173175 cri.go:89] found id: ""
	I1108 10:27:46.849820 1173175 logs.go:282] 0 containers: []
	W1108 10:27:46.849828 1173175 logs.go:284] No container was found matching "storage-provisioner"
	I1108 10:27:46.849837 1173175 logs.go:123] Gathering logs for container status ...
	I1108 10:27:46.849850 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 10:27:46.900945 1173175 logs.go:123] Gathering logs for kubelet ...
	I1108 10:27:46.900970 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 10:27:47.027928 1173175 logs.go:123] Gathering logs for dmesg ...
	I1108 10:27:47.028004 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 10:27:47.047771 1173175 logs.go:123] Gathering logs for describe nodes ...
	I1108 10:27:47.047806 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 10:27:47.122201 1173175 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 10:27:47.122274 1173175 logs.go:123] Gathering logs for kube-apiserver [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98] ...
	I1108 10:27:47.122294 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:47.170318 1173175 logs.go:123] Gathering logs for kube-scheduler [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412] ...
	I1108 10:27:47.170349 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:47.264805 1173175 logs.go:123] Gathering logs for kube-controller-manager [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c] ...
	I1108 10:27:47.264833 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:47.304324 1173175 logs.go:123] Gathering logs for CRI-O ...
	I1108 10:27:47.304352 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
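
The whole log-gathering pass above reduces to a handful of commands that can also be run interactively inside the node (for example after minikube ssh into the affected profile):

    sudo journalctl -u kubelet -n 400                                          # kubelet logs
    sudo journalctl -u crio -n 400                                             # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
    sudo crictl ps -a                                                          # container status
    sudo crictl logs --tail 400 <container-id>                                 # one container's logs (id from crictl ps)
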
	I1108 10:27:46.233234 1188449 out.go:252] * Updating the running docker "pause-343192" container ...
	I1108 10:27:46.233268 1188449 machine.go:94] provisionDockerMachine start ...
	I1108 10:27:46.233355 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:46.251182 1188449 main.go:143] libmachine: Using SSH client type: native
	I1108 10:27:46.251515 1188449 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34482 <nil> <nil>}
	I1108 10:27:46.251532 1188449 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:27:46.411877 1188449 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-343192
	
	I1108 10:27:46.411906 1188449 ubuntu.go:182] provisioning hostname "pause-343192"
	I1108 10:27:46.412007 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:46.430365 1188449 main.go:143] libmachine: Using SSH client type: native
	I1108 10:27:46.430669 1188449 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34482 <nil> <nil>}
	I1108 10:27:46.430686 1188449 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-343192 && echo "pause-343192" | sudo tee /etc/hostname
	I1108 10:27:46.602798 1188449 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-343192
	
	I1108 10:27:46.602869 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:46.628667 1188449 main.go:143] libmachine: Using SSH client type: native
	I1108 10:27:46.629033 1188449 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34482 <nil> <nil>}
	I1108 10:27:46.629050 1188449 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-343192' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-343192/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-343192' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:27:46.796382 1188449 main.go:143] libmachine: SSH cmd err, output: <nil>: 
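
Laid out with comments, the hosts-file snippet that just ran is a small idempotent update: it does nothing if the hostname is already mapped, rewrites an existing 127.0.1.1 entry if there is one, and only otherwise appends a new line:

    if ! grep -xq '.*\spause-343192' /etc/hosts; then            # skip entirely if the name is already present
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then               # rewrite an existing 127.0.1.1 mapping in place
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-343192/g' /etc/hosts
      else                                                       # otherwise append a fresh mapping
        echo '127.0.1.1 pause-343192' | sudo tee -a /etc/hosts
      fi
    fi
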
	I1108 10:27:46.796413 1188449 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:27:46.796498 1188449 ubuntu.go:190] setting up certificates
	I1108 10:27:46.796508 1188449 provision.go:84] configureAuth start
	I1108 10:27:46.796568 1188449 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-343192
	I1108 10:27:46.822023 1188449 provision.go:143] copyHostCerts
	I1108 10:27:46.822087 1188449 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:27:46.822097 1188449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:27:46.822173 1188449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:27:46.822267 1188449 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:27:46.822273 1188449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:27:46.822299 1188449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:27:46.822348 1188449 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:27:46.822352 1188449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:27:46.822375 1188449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:27:46.822423 1188449 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.pause-343192 san=[127.0.0.1 192.168.85.2 localhost minikube pause-343192]
	I1108 10:27:47.003469 1188449 provision.go:177] copyRemoteCerts
	I1108 10:27:47.003548 1188449 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:27:47.003610 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:47.032949 1188449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34482 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/pause-343192/id_rsa Username:docker}
	I1108 10:27:47.149230 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:27:47.184876 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:27:47.210834 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1108 10:27:47.241542 1188449 provision.go:87] duration metric: took 445.014907ms to configureAuth
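
After configureAuth the node should hold the CA plus the freshly generated server certificate under /etc/docker; a quick way to confirm the SANs requested above made it into the certificate (paths taken from the scp lines, openssl assumed to be present on the node):

    ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
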
	I1108 10:27:47.241567 1188449 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:27:47.241784 1188449 config.go:182] Loaded profile config "pause-343192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:27:47.241897 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:47.263211 1188449 main.go:143] libmachine: Using SSH client type: native
	I1108 10:27:47.263612 1188449 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34482 <nil> <nil>}
	I1108 10:27:47.263751 1188449 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:27:49.881560 1173175 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:27:49.882026 1173175 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1108 10:27:49.882102 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 10:27:49.882182 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 10:27:49.913451 1173175 cri.go:89] found id: "8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:49.913471 1173175 cri.go:89] found id: ""
	I1108 10:27:49.913479 1173175 logs.go:282] 1 containers: [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98]
	I1108 10:27:49.913544 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:49.917303 1173175 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 10:27:49.917383 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 10:27:49.943688 1173175 cri.go:89] found id: ""
	I1108 10:27:49.943716 1173175 logs.go:282] 0 containers: []
	W1108 10:27:49.943725 1173175 logs.go:284] No container was found matching "etcd"
	I1108 10:27:49.943731 1173175 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 10:27:49.943789 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 10:27:49.968772 1173175 cri.go:89] found id: ""
	I1108 10:27:49.968796 1173175 logs.go:282] 0 containers: []
	W1108 10:27:49.968805 1173175 logs.go:284] No container was found matching "coredns"
	I1108 10:27:49.968811 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 10:27:49.968869 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 10:27:49.995180 1173175 cri.go:89] found id: "1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:49.995202 1173175 cri.go:89] found id: ""
	I1108 10:27:49.995211 1173175 logs.go:282] 1 containers: [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412]
	I1108 10:27:49.995266 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:49.999069 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 10:27:49.999142 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 10:27:50.034370 1173175 cri.go:89] found id: ""
	I1108 10:27:50.034397 1173175 logs.go:282] 0 containers: []
	W1108 10:27:50.034406 1173175 logs.go:284] No container was found matching "kube-proxy"
	I1108 10:27:50.034413 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 10:27:50.034482 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 10:27:50.065037 1173175 cri.go:89] found id: "1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:50.065061 1173175 cri.go:89] found id: ""
	I1108 10:27:50.065070 1173175 logs.go:282] 1 containers: [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c]
	I1108 10:27:50.065126 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:50.068995 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 10:27:50.069071 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 10:27:50.097193 1173175 cri.go:89] found id: ""
	I1108 10:27:50.097221 1173175 logs.go:282] 0 containers: []
	W1108 10:27:50.097230 1173175 logs.go:284] No container was found matching "kindnet"
	I1108 10:27:50.097238 1173175 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 10:27:50.097301 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 10:27:50.124701 1173175 cri.go:89] found id: ""
	I1108 10:27:50.124726 1173175 logs.go:282] 0 containers: []
	W1108 10:27:50.124735 1173175 logs.go:284] No container was found matching "storage-provisioner"
	I1108 10:27:50.124744 1173175 logs.go:123] Gathering logs for kube-controller-manager [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c] ...
	I1108 10:27:50.124775 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:50.158775 1173175 logs.go:123] Gathering logs for CRI-O ...
	I1108 10:27:50.158803 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 10:27:50.213271 1173175 logs.go:123] Gathering logs for container status ...
	I1108 10:27:50.213307 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 10:27:50.244020 1173175 logs.go:123] Gathering logs for kubelet ...
	I1108 10:27:50.244047 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 10:27:50.359123 1173175 logs.go:123] Gathering logs for dmesg ...
	I1108 10:27:50.359159 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 10:27:50.377451 1173175 logs.go:123] Gathering logs for describe nodes ...
	I1108 10:27:50.377486 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 10:27:50.441907 1173175 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 10:27:50.441926 1173175 logs.go:123] Gathering logs for kube-apiserver [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98] ...
	I1108 10:27:50.441940 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:50.474355 1173175 logs.go:123] Gathering logs for kube-scheduler [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412] ...
	I1108 10:27:50.474387 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:53.035315 1173175 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:27:53.035749 1173175 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1108 10:27:53.035794 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 10:27:53.035851 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 10:27:53.068665 1173175 cri.go:89] found id: "8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:53.068685 1173175 cri.go:89] found id: ""
	I1108 10:27:53.068693 1173175 logs.go:282] 1 containers: [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98]
	I1108 10:27:53.068748 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:53.072689 1173175 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 10:27:53.072762 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 10:27:53.109777 1173175 cri.go:89] found id: ""
	I1108 10:27:53.109802 1173175 logs.go:282] 0 containers: []
	W1108 10:27:53.109810 1173175 logs.go:284] No container was found matching "etcd"
	I1108 10:27:53.109816 1173175 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 10:27:53.109873 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 10:27:53.149242 1173175 cri.go:89] found id: ""
	I1108 10:27:53.149266 1173175 logs.go:282] 0 containers: []
	W1108 10:27:53.149275 1173175 logs.go:284] No container was found matching "coredns"
	I1108 10:27:53.149281 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 10:27:53.149341 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 10:27:53.188188 1173175 cri.go:89] found id: "1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:53.188211 1173175 cri.go:89] found id: ""
	I1108 10:27:53.188219 1173175 logs.go:282] 1 containers: [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412]
	I1108 10:27:53.188275 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:53.192891 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 10:27:53.192967 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 10:27:53.230575 1173175 cri.go:89] found id: ""
	I1108 10:27:53.230608 1173175 logs.go:282] 0 containers: []
	W1108 10:27:53.230618 1173175 logs.go:284] No container was found matching "kube-proxy"
	I1108 10:27:53.230624 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 10:27:53.230679 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 10:27:53.262411 1173175 cri.go:89] found id: "1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:53.262438 1173175 cri.go:89] found id: ""
	I1108 10:27:53.262447 1173175 logs.go:282] 1 containers: [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c]
	I1108 10:27:53.262502 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:53.266295 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 10:27:53.266361 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 10:27:53.299542 1173175 cri.go:89] found id: ""
	I1108 10:27:53.299583 1173175 logs.go:282] 0 containers: []
	W1108 10:27:53.299592 1173175 logs.go:284] No container was found matching "kindnet"
	I1108 10:27:53.299598 1173175 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 10:27:53.299666 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 10:27:53.334416 1173175 cri.go:89] found id: ""
	I1108 10:27:53.334444 1173175 logs.go:282] 0 containers: []
	W1108 10:27:53.334452 1173175 logs.go:284] No container was found matching "storage-provisioner"
	I1108 10:27:53.334462 1173175 logs.go:123] Gathering logs for kubelet ...
	I1108 10:27:53.334473 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 10:27:53.466663 1173175 logs.go:123] Gathering logs for dmesg ...
	I1108 10:27:53.466699 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 10:27:53.487147 1173175 logs.go:123] Gathering logs for describe nodes ...
	I1108 10:27:53.487178 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 10:27:53.589677 1173175 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 10:27:53.589696 1173175 logs.go:123] Gathering logs for kube-apiserver [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98] ...
	I1108 10:27:53.589708 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:53.631372 1173175 logs.go:123] Gathering logs for kube-scheduler [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412] ...
	I1108 10:27:53.631442 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:53.701768 1173175 logs.go:123] Gathering logs for kube-controller-manager [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c] ...
	I1108 10:27:53.701847 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:53.740220 1173175 logs.go:123] Gathering logs for CRI-O ...
	I1108 10:27:53.740248 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 10:27:53.818012 1173175 logs.go:123] Gathering logs for container status ...
	I1108 10:27:53.818090 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 10:27:52.634362 1188449 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:27:52.634384 1188449 machine.go:97] duration metric: took 6.401107228s to provisionDockerMachine
	I1108 10:27:52.634396 1188449 start.go:293] postStartSetup for "pause-343192" (driver="docker")
	I1108 10:27:52.634406 1188449 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:27:52.634471 1188449 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:27:52.634516 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:52.651668 1188449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34482 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/pause-343192/id_rsa Username:docker}
	I1108 10:27:52.756078 1188449 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:27:52.759215 1188449 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:27:52.759248 1188449 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:27:52.759260 1188449 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:27:52.759317 1188449 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:27:52.759399 1188449 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:27:52.759500 1188449 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:27:52.766671 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:27:52.784137 1188449 start.go:296] duration metric: took 149.725049ms for postStartSetup
	I1108 10:27:52.784257 1188449 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:27:52.784325 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:52.800907 1188449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34482 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/pause-343192/id_rsa Username:docker}
	I1108 10:27:52.901891 1188449 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
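
The two df probes above are the disk-space check for the node: the first reads the Use% column for /var, the second the gigabytes still available:

    df -h /var  | awk 'NR==2{print $5}'   # fifth column: Use% of /var
    df -BG /var | awk 'NR==2{print $4}'   # fourth column: gigabytes available on /var
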
	I1108 10:27:52.906841 1188449 fix.go:56] duration metric: took 6.695575297s for fixHost
	I1108 10:27:52.906867 1188449 start.go:83] releasing machines lock for "pause-343192", held for 6.695628383s
	I1108 10:27:52.906960 1188449 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-343192
	I1108 10:27:52.924079 1188449 ssh_runner.go:195] Run: cat /version.json
	I1108 10:27:52.924113 1188449 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:27:52.924172 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:52.924187 1188449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-343192
	I1108 10:27:52.946527 1188449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34482 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/pause-343192/id_rsa Username:docker}
	I1108 10:27:52.946554 1188449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34482 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/pause-343192/id_rsa Username:docker}
	I1108 10:27:53.150635 1188449 ssh_runner.go:195] Run: systemctl --version
	I1108 10:27:53.158177 1188449 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:27:53.217272 1188449 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:27:53.227331 1188449 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:27:53.227449 1188449 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:27:53.236985 1188449 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:27:53.237059 1188449 start.go:496] detecting cgroup driver to use...
	I1108 10:27:53.237104 1188449 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:27:53.237175 1188449 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:27:53.252498 1188449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:27:53.269822 1188449 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:27:53.269890 1188449 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:27:53.286414 1188449 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:27:53.300655 1188449 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:27:53.487645 1188449 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:27:53.659250 1188449 docker.go:234] disabling docker service ...
	I1108 10:27:53.659322 1188449 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:27:53.675341 1188449 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:27:53.689349 1188449 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:27:53.859426 1188449 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:27:54.015722 1188449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
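
With docker and cri-docker stopped, disabled and masked by the steps above, CRI-O is the only runtime expected to answer; that is easy to confirm on the node:

    systemctl is-active crio                 # expected: active
    systemctl is-active docker               # expected: inactive after the stop/mask above
    systemctl is-enabled docker cri-docker   # both should report "masked"
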
	I1108 10:27:54.031262 1188449 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:27:54.045873 1188449 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:27:54.045951 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.055150 1188449 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:27:54.055225 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.065284 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.075367 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.085090 1188449 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:27:54.094164 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.103419 1188449 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.112320 1188449 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:27:54.121596 1188449 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:27:54.129478 1188449 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:27:54.141868 1188449 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:27:54.274189 1188449 ssh_runner.go:195] Run: sudo systemctl restart crio
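
Taken together, the sed edits above adjust the drop-in at /etc/crio/crio.conf.d/02-crio.conf: pause image pinned, cgroup manager set to cgroupfs, conmon cgroup set to "pod", and unprivileged low ports opened. Assuming the keys were already present in the base config (the replacements rely on that), the effective values can be read back after the restart:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # roughly expected:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
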
	I1108 10:27:54.462766 1188449 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:27:54.462835 1188449 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:27:54.466740 1188449 start.go:564] Will wait 60s for crictl version
	I1108 10:27:54.466799 1188449 ssh_runner.go:195] Run: which crictl
	I1108 10:27:54.470252 1188449 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:27:54.503589 1188449 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:27:54.503736 1188449 ssh_runner.go:195] Run: crio --version
	I1108 10:27:54.532552 1188449 ssh_runner.go:195] Run: crio --version
	I1108 10:27:54.563110 1188449 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:27:54.566073 1188449 cli_runner.go:164] Run: docker network inspect pause-343192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:27:54.581612 1188449 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:27:54.585329 1188449 kubeadm.go:884] updating cluster {Name:pause-343192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-343192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:27:54.585473 1188449 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:27:54.585524 1188449 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:27:54.615964 1188449 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:27:54.615989 1188449 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:27:54.616043 1188449 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:27:54.644969 1188449 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:27:54.644992 1188449 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:27:54.645001 1188449 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 10:27:54.645105 1188449 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-343192 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-343192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:27:54.645181 1188449 ssh_runner.go:195] Run: crio config
	I1108 10:27:54.697689 1188449 cni.go:84] Creating CNI manager for ""
	I1108 10:27:54.697755 1188449 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:27:54.697778 1188449 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:27:54.697803 1188449 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-343192 NodeName:pause-343192 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:27:54.697935 1188449 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-343192"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:27:54.698008 1188449 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:27:54.705607 1188449 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:27:54.705676 1188449 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:27:54.712963 1188449 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1108 10:27:54.727187 1188449 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:27:54.740358 1188449 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
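The 2209-byte kubeadm.yaml.new written above carries the kubeadm, kubelet, and kube-proxy configuration dumped a few lines earlier; the KubeletConfiguration notably disables disk-based eviction (imageGCHighThresholdPercent: 100 and all evictionHard thresholds at "0%"). As a minimal illustration rather than part of the test output, the following Go sketch reads a standalone copy of just that KubeletConfiguration document (the local file name is an assumption) and prints those overrides:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeletCfg captures only the KubeletConfiguration fields that the
// generated config above overrides to disable disk-based eviction.
type kubeletCfg struct {
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

func main() {
	// Hypothetical local copy of just the KubeletConfiguration document; on
	// the node the full multi-document file is /var/tmp/minikube/kubeadm.yaml.new.
	data, err := os.ReadFile("kubelet-config.yaml")
	if err != nil {
		panic(err)
	}
	var cfg kubeletCfg
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("imageGCHighThresholdPercent=%d evictionHard=%v failSwapOn=%v\n",
		cfg.ImageGCHighThresholdPercent, cfg.EvictionHard, cfg.FailSwapOn)
}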
	I1108 10:27:54.752993 1188449 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:27:54.756429 1188449 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:27:54.892269 1188449 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:27:54.905287 1188449 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192 for IP: 192.168.85.2
	I1108 10:27:54.905310 1188449 certs.go:195] generating shared ca certs ...
	I1108 10:27:54.905327 1188449 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:27:54.905540 1188449 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:27:54.905615 1188449 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:27:54.905629 1188449 certs.go:257] generating profile certs ...
	I1108 10:27:54.905732 1188449 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/client.key
	I1108 10:27:54.905807 1188449 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/apiserver.key.fbeb1480
	I1108 10:27:54.905859 1188449 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/proxy-client.key
	I1108 10:27:54.905977 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:27:54.906011 1188449 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:27:54.906024 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:27:54.906051 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:27:54.906078 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:27:54.906134 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:27:54.906180 1188449 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:27:54.906819 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:27:54.927058 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:27:54.947939 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:27:54.971559 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:27:54.993926 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 10:27:55.043413 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:27:55.067881 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:27:55.116241 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 10:27:55.174335 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:27:55.218452 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:27:55.274877 1188449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:27:55.299004 1188449 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:27:55.315946 1188449 ssh_runner.go:195] Run: openssl version
	I1108 10:27:55.323502 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:27:55.333787 1188449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:27:55.338386 1188449 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:27:55.338508 1188449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:27:55.396385 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:27:55.406600 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:27:55.424523 1188449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:27:55.430030 1188449 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:27:55.430149 1188449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:27:55.497659 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:27:55.508951 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:27:55.518639 1188449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:27:55.522586 1188449 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:27:55.522649 1188449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:27:55.574982 1188449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:27:55.583657 1188449 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:27:55.587683 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:27:55.629677 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:27:55.671489 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:27:55.723283 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:27:55.773098 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:27:55.814132 1188449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
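The openssl `-checkend 86400` runs above confirm that each existing control-plane certificate is still valid for at least another 24 hours before it is reused for the restart. A minimal Go sketch of the same check, assuming nothing beyond the standard library (the path in main is just one of the certs listed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the first certificate in the PEM file is
// still valid for at least d, which is what `openssl x509 -checkend`
// is used for above.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}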
	I1108 10:27:55.857624 1188449 kubeadm.go:401] StartCluster: {Name:pause-343192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-343192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:27:55.857741 1188449 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:27:55.857811 1188449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:27:55.888735 1188449 cri.go:89] found id: "0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f"
	I1108 10:27:55.888761 1188449 cri.go:89] found id: "f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981"
	I1108 10:27:55.888766 1188449 cri.go:89] found id: "cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb"
	I1108 10:27:55.888769 1188449 cri.go:89] found id: "d7a533806da9a332f1645c3360d5c4237a84729469ae7cb42a33daa107441f86"
	I1108 10:27:55.888774 1188449 cri.go:89] found id: "a2e998f95e3dabd458d90198ae4130a56a78b9685b3e0f821b670a31300781b6"
	I1108 10:27:55.888778 1188449 cri.go:89] found id: "808a055bd254c0bbbee4c3c751830708801f4ced02a2c5deb329197a434cd541"
	I1108 10:27:55.888782 1188449 cri.go:89] found id: "7036025861b31b3ce32c7deda2244e7cb402d4a8ef261e6ea3f8a57bb78fce01"
	I1108 10:27:55.888785 1188449 cri.go:89] found id: "1d28edcd8cca7648e1bc0b2fb042df7c5b1f90debfa5083af69296a4afa052d1"
	I1108 10:27:55.888788 1188449 cri.go:89] found id: "cbed26c9cc82d142d3d895dc7635d0efb73e033cb99b08450139b3c5de56c054"
	I1108 10:27:55.888796 1188449 cri.go:89] found id: "a327dc75a2da5df572b9729b0560d0810a03921afea0a1ea766f4032377a4d50"
	I1108 10:27:55.888802 1188449 cri.go:89] found id: "e64d76a590f592ad5123ea146cba17cee655e4c302e7d2c00d65f628678c8146"
	I1108 10:27:55.888806 1188449 cri.go:89] found id: "6cf1df7c69fa46c783c4d0d0ed7275b2f7575903b38be95723c5fadb80a5adb2"
	I1108 10:27:55.888809 1188449 cri.go:89] found id: "7a08c37ef37992bde0d0bd0f71fdddbca47883b01dd90e96da703efd35f23fd8"
	I1108 10:27:55.888812 1188449 cri.go:89] found id: "4c21fbaf9d079fb5c4cbd03ca8e0149295b10f764ae1c6826063a0516b80ba46"
	I1108 10:27:55.888816 1188449 cri.go:89] found id: ""
	I1108 10:27:55.888866 1188449 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:27:55.901846 1188449 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:27:55Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:27:55.901917 1188449 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:27:55.917033 1188449 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:27:55.917053 1188449 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:27:55.917105 1188449 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:27:55.928613 1188449 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:27:55.929237 1188449 kubeconfig.go:125] found "pause-343192" server: "https://192.168.85.2:8443"
	I1108 10:27:55.930028 1188449 kapi.go:59] client config for pause-343192: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/client.crt", KeyFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/client.key", CAFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21275c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 10:27:55.930507 1188449 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1108 10:27:55.930527 1188449 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1108 10:27:55.930533 1188449 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1108 10:27:55.930538 1188449 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1108 10:27:55.930545 1188449 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1108 10:27:55.930852 1188449 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:27:55.941642 1188449 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:27:55.941677 1188449 kubeadm.go:602] duration metric: took 24.618448ms to restartPrimaryControlPlane
	I1108 10:27:55.941686 1188449 kubeadm.go:403] duration metric: took 84.07357ms to StartCluster
	I1108 10:27:55.941703 1188449 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:27:55.941766 1188449 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:27:55.942627 1188449 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:27:55.942833 1188449 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:27:55.943161 1188449 config.go:182] Loaded profile config "pause-343192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:27:55.943207 1188449 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:27:55.947871 1188449 out.go:179] * Enabled addons: 
	I1108 10:27:55.947960 1188449 out.go:179] * Verifying Kubernetes components...
	I1108 10:27:55.950764 1188449 addons.go:515] duration metric: took 7.55534ms for enable addons: enabled=[]
	I1108 10:27:55.950848 1188449 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:27:56.382339 1173175 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:27:56.382733 1173175 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1108 10:27:56.382784 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 10:27:56.382841 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 10:27:56.425929 1173175 cri.go:89] found id: "8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:56.425951 1173175 cri.go:89] found id: ""
	I1108 10:27:56.425959 1173175 logs.go:282] 1 containers: [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98]
	I1108 10:27:56.426029 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:56.430034 1173175 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 10:27:56.430105 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 10:27:56.481528 1173175 cri.go:89] found id: ""
	I1108 10:27:56.481555 1173175 logs.go:282] 0 containers: []
	W1108 10:27:56.481564 1173175 logs.go:284] No container was found matching "etcd"
	I1108 10:27:56.481569 1173175 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 10:27:56.481629 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 10:27:56.528643 1173175 cri.go:89] found id: ""
	I1108 10:27:56.528671 1173175 logs.go:282] 0 containers: []
	W1108 10:27:56.528695 1173175 logs.go:284] No container was found matching "coredns"
	I1108 10:27:56.528702 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 10:27:56.528777 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 10:27:56.568272 1173175 cri.go:89] found id: "1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:56.568297 1173175 cri.go:89] found id: ""
	I1108 10:27:56.568306 1173175 logs.go:282] 1 containers: [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412]
	I1108 10:27:56.568360 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:56.572074 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 10:27:56.572152 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 10:27:56.612394 1173175 cri.go:89] found id: ""
	I1108 10:27:56.612414 1173175 logs.go:282] 0 containers: []
	W1108 10:27:56.612423 1173175 logs.go:284] No container was found matching "kube-proxy"
	I1108 10:27:56.612430 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 10:27:56.612528 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 10:27:56.663151 1173175 cri.go:89] found id: "1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:56.663172 1173175 cri.go:89] found id: ""
	I1108 10:27:56.663183 1173175 logs.go:282] 1 containers: [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c]
	I1108 10:27:56.663249 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:27:56.672762 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 10:27:56.672849 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 10:27:56.717759 1173175 cri.go:89] found id: ""
	I1108 10:27:56.717791 1173175 logs.go:282] 0 containers: []
	W1108 10:27:56.717799 1173175 logs.go:284] No container was found matching "kindnet"
	I1108 10:27:56.717806 1173175 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 10:27:56.717875 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 10:27:56.761256 1173175 cri.go:89] found id: ""
	I1108 10:27:56.761289 1173175 logs.go:282] 0 containers: []
	W1108 10:27:56.761298 1173175 logs.go:284] No container was found matching "storage-provisioner"
	I1108 10:27:56.761308 1173175 logs.go:123] Gathering logs for dmesg ...
	I1108 10:27:56.761319 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 10:27:56.783740 1173175 logs.go:123] Gathering logs for describe nodes ...
	I1108 10:27:56.783768 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 10:27:56.887768 1173175 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 10:27:56.887791 1173175 logs.go:123] Gathering logs for kube-apiserver [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98] ...
	I1108 10:27:56.887803 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:27:56.931725 1173175 logs.go:123] Gathering logs for kube-scheduler [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412] ...
	I1108 10:27:56.931754 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:27:57.050294 1173175 logs.go:123] Gathering logs for kube-controller-manager [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c] ...
	I1108 10:27:57.050338 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:27:57.086326 1173175 logs.go:123] Gathering logs for CRI-O ...
	I1108 10:27:57.086354 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 10:27:57.160549 1173175 logs.go:123] Gathering logs for container status ...
	I1108 10:27:57.160587 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 10:27:57.208107 1173175 logs.go:123] Gathering logs for kubelet ...
	I1108 10:27:57.208136 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 10:27:56.185150 1188449 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:27:56.201654 1188449 node_ready.go:35] waiting up to 6m0s for node "pause-343192" to be "Ready" ...
	I1108 10:28:00.900871 1188449 node_ready.go:49] node "pause-343192" is "Ready"
	I1108 10:28:00.900899 1188449 node_ready.go:38] duration metric: took 4.699164239s for node "pause-343192" to be "Ready" ...
	I1108 10:28:00.900913 1188449 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:28:00.900974 1188449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:28:00.917793 1188449 api_server.go:72] duration metric: took 4.974922945s to wait for apiserver process to appear ...
	I1108 10:28:00.917817 1188449 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:28:00.917837 1188449 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:28:00.966846 1188449 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:28:00.966942 1188449 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:27:59.863541 1173175 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:28:01.418483 1188449 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:28:01.427808 1188449 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:28:01.427838 1188449 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:28:01.918462 1188449 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:28:01.927653 1188449 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:28:01.928835 1188449 api_server.go:141] control plane version: v1.34.1
	I1108 10:28:01.928861 1188449 api_server.go:131] duration metric: took 1.011036679s to wait for apiserver health ...
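The probes above show the apiserver finishing startup: poststarthooks such as rbac/bootstrap-roles keep /healthz at 500 until they complete, after which the endpoint returns 200 and the wait finishes in roughly a second. A minimal Go sketch of such a polling loop, assuming the URL from this run and skipping TLS verification purely for brevity (minikube itself validates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Verification is skipped only to keep the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not return 200 within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute))
}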
	I1108 10:28:01.928871 1188449 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:28:01.933381 1188449 system_pods.go:59] 7 kube-system pods found
	I1108 10:28:01.933423 1188449 system_pods.go:61] "coredns-66bc5c9577-z4htg" [ccbca0f1-a4f6-4bdb-91f4-b4eb718ee497] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:28:01.933432 1188449 system_pods.go:61] "etcd-pause-343192" [e9dd9e24-4928-4921-baba-1e43583dec44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:28:01.933439 1188449 system_pods.go:61] "kindnet-5dl8w" [e6e7ac85-7324-4cb4-955e-95b1709547a2] Running
	I1108 10:28:01.933447 1188449 system_pods.go:61] "kube-apiserver-pause-343192" [aa6ba0e4-9923-46ed-b85f-5d6ba133f16a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:28:01.933456 1188449 system_pods.go:61] "kube-controller-manager-pause-343192" [0a07fd03-853c-4136-b7e1-a7331811ab39] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:28:01.933466 1188449 system_pods.go:61] "kube-proxy-774lt" [840433f1-4620-41e8-80eb-4190421a0b49] Running
	I1108 10:28:01.933475 1188449 system_pods.go:61] "kube-scheduler-pause-343192" [dc53200d-9f68-4dba-aa09-b0e0839beae5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:28:01.933483 1188449 system_pods.go:74] duration metric: took 4.604999ms to wait for pod list to return data ...
	I1108 10:28:01.933496 1188449 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:28:01.936076 1188449 default_sa.go:45] found service account: "default"
	I1108 10:28:01.936100 1188449 default_sa.go:55] duration metric: took 2.597213ms for default service account to be created ...
	I1108 10:28:01.936110 1188449 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:28:01.938932 1188449 system_pods.go:86] 7 kube-system pods found
	I1108 10:28:01.938964 1188449 system_pods.go:89] "coredns-66bc5c9577-z4htg" [ccbca0f1-a4f6-4bdb-91f4-b4eb718ee497] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:28:01.938994 1188449 system_pods.go:89] "etcd-pause-343192" [e9dd9e24-4928-4921-baba-1e43583dec44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:28:01.939010 1188449 system_pods.go:89] "kindnet-5dl8w" [e6e7ac85-7324-4cb4-955e-95b1709547a2] Running
	I1108 10:28:01.939019 1188449 system_pods.go:89] "kube-apiserver-pause-343192" [aa6ba0e4-9923-46ed-b85f-5d6ba133f16a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:28:01.939034 1188449 system_pods.go:89] "kube-controller-manager-pause-343192" [0a07fd03-853c-4136-b7e1-a7331811ab39] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:28:01.939039 1188449 system_pods.go:89] "kube-proxy-774lt" [840433f1-4620-41e8-80eb-4190421a0b49] Running
	I1108 10:28:01.939049 1188449 system_pods.go:89] "kube-scheduler-pause-343192" [dc53200d-9f68-4dba-aa09-b0e0839beae5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:28:01.939087 1188449 system_pods.go:126] duration metric: took 2.960098ms to wait for k8s-apps to be running ...
	I1108 10:28:01.939104 1188449 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:28:01.939170 1188449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:28:01.954623 1188449 system_svc.go:56] duration metric: took 15.505853ms WaitForService to wait for kubelet
	I1108 10:28:01.954654 1188449 kubeadm.go:587] duration metric: took 6.011788708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:28:01.954675 1188449 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:28:01.958495 1188449 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:28:01.958533 1188449 node_conditions.go:123] node cpu capacity is 2
	I1108 10:28:01.958547 1188449 node_conditions.go:105] duration metric: took 3.8657ms to run NodePressure ...
	I1108 10:28:01.958561 1188449 start.go:242] waiting for startup goroutines ...
	I1108 10:28:01.958569 1188449 start.go:247] waiting for cluster config update ...
	I1108 10:28:01.958576 1188449 start.go:256] writing updated cluster config ...
	I1108 10:28:01.958962 1188449 ssh_runner.go:195] Run: rm -f paused
	I1108 10:28:01.963567 1188449 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:28:01.964266 1188449 kapi.go:59] client config for pause-343192: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/client.crt", KeyFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/pause-343192/client.key", CAFile:"/home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21275c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 10:28:01.967689 1188449 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z4htg" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:28:03.972839 1188449 pod_ready.go:104] pod "coredns-66bc5c9577-z4htg" is not "Ready", error: <nil>
	W1108 10:28:05.973632 1188449 pod_ready.go:104] pod "coredns-66bc5c9577-z4htg" is not "Ready", error: <nil>
	I1108 10:28:04.863861 1173175 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 10:28:04.863921 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 10:28:04.863987 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 10:28:04.890675 1173175 cri.go:89] found id: "f6b0773c68d746faa2430b80e04ecdde7ed1220310045bd0a7f4cafa3b838acf"
	I1108 10:28:04.890698 1173175 cri.go:89] found id: "8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:28:04.890703 1173175 cri.go:89] found id: ""
	I1108 10:28:04.890710 1173175 logs.go:282] 2 containers: [f6b0773c68d746faa2430b80e04ecdde7ed1220310045bd0a7f4cafa3b838acf 8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98]
	I1108 10:28:04.890770 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:28:04.894537 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:28:04.898476 1173175 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 10:28:04.898550 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 10:28:04.924379 1173175 cri.go:89] found id: ""
	I1108 10:28:04.924406 1173175 logs.go:282] 0 containers: []
	W1108 10:28:04.924415 1173175 logs.go:284] No container was found matching "etcd"
	I1108 10:28:04.924420 1173175 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 10:28:04.924524 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 10:28:04.952279 1173175 cri.go:89] found id: ""
	I1108 10:28:04.952306 1173175 logs.go:282] 0 containers: []
	W1108 10:28:04.952315 1173175 logs.go:284] No container was found matching "coredns"
	I1108 10:28:04.952321 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 10:28:04.952388 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 10:28:04.982154 1173175 cri.go:89] found id: "1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:28:04.982179 1173175 cri.go:89] found id: ""
	I1108 10:28:04.982188 1173175 logs.go:282] 1 containers: [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412]
	I1108 10:28:04.982249 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:28:04.986005 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 10:28:04.986078 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 10:28:05.014860 1173175 cri.go:89] found id: ""
	I1108 10:28:05.014885 1173175 logs.go:282] 0 containers: []
	W1108 10:28:05.014893 1173175 logs.go:284] No container was found matching "kube-proxy"
	I1108 10:28:05.014899 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 10:28:05.014961 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 10:28:05.044185 1173175 cri.go:89] found id: "1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:28:05.044207 1173175 cri.go:89] found id: ""
	I1108 10:28:05.044216 1173175 logs.go:282] 1 containers: [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c]
	I1108 10:28:05.044271 1173175 ssh_runner.go:195] Run: which crictl
	I1108 10:28:05.047888 1173175 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 10:28:05.047958 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 10:28:05.077825 1173175 cri.go:89] found id: ""
	I1108 10:28:05.077849 1173175 logs.go:282] 0 containers: []
	W1108 10:28:05.077858 1173175 logs.go:284] No container was found matching "kindnet"
	I1108 10:28:05.077864 1173175 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 10:28:05.077921 1173175 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 10:28:05.105475 1173175 cri.go:89] found id: ""
	I1108 10:28:05.105500 1173175 logs.go:282] 0 containers: []
	W1108 10:28:05.105522 1173175 logs.go:284] No container was found matching "storage-provisioner"
	I1108 10:28:05.105559 1173175 logs.go:123] Gathering logs for dmesg ...
	I1108 10:28:05.105578 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 10:28:05.124215 1173175 logs.go:123] Gathering logs for kube-apiserver [f6b0773c68d746faa2430b80e04ecdde7ed1220310045bd0a7f4cafa3b838acf] ...
	I1108 10:28:05.124247 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6b0773c68d746faa2430b80e04ecdde7ed1220310045bd0a7f4cafa3b838acf"
	I1108 10:28:05.160267 1173175 logs.go:123] Gathering logs for CRI-O ...
	I1108 10:28:05.160299 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 10:28:05.221433 1173175 logs.go:123] Gathering logs for container status ...
	I1108 10:28:05.221467 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 10:28:05.252220 1173175 logs.go:123] Gathering logs for describe nodes ...
	I1108 10:28:05.252251 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1108 10:28:07.473625 1188449 pod_ready.go:94] pod "coredns-66bc5c9577-z4htg" is "Ready"
	I1108 10:28:07.473656 1188449 pod_ready.go:86] duration metric: took 5.505940122s for pod "coredns-66bc5c9577-z4htg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:07.476262 1188449 pod_ready.go:83] waiting for pod "etcd-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:28:09.482983 1188449 pod_ready.go:104] pod "etcd-pause-343192" is not "Ready", error: <nil>
	W1108 10:28:11.981454 1188449 pod_ready.go:104] pod "etcd-pause-343192" is not "Ready", error: <nil>
	W1108 10:28:13.982098 1188449 pod_ready.go:104] pod "etcd-pause-343192" is not "Ready", error: <nil>
	I1108 10:28:14.981696 1188449 pod_ready.go:94] pod "etcd-pause-343192" is "Ready"
	I1108 10:28:14.981727 1188449 pod_ready.go:86] duration metric: took 7.505441455s for pod "etcd-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:14.984181 1188449 pod_ready.go:83] waiting for pod "kube-apiserver-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:14.988800 1188449 pod_ready.go:94] pod "kube-apiserver-pause-343192" is "Ready"
	I1108 10:28:14.988831 1188449 pod_ready.go:86] duration metric: took 4.624354ms for pod "kube-apiserver-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:14.991218 1188449 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:14.995325 1188449 pod_ready.go:94] pod "kube-controller-manager-pause-343192" is "Ready"
	I1108 10:28:14.995348 1188449 pod_ready.go:86] duration metric: took 4.104307ms for pod "kube-controller-manager-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:14.997650 1188449 pod_ready.go:83] waiting for pod "kube-proxy-774lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:15.180095 1188449 pod_ready.go:94] pod "kube-proxy-774lt" is "Ready"
	I1108 10:28:15.180119 1188449 pod_ready.go:86] duration metric: took 182.444472ms for pod "kube-proxy-774lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:15.380304 1188449 pod_ready.go:83] waiting for pod "kube-scheduler-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:15.780089 1188449 pod_ready.go:94] pod "kube-scheduler-pause-343192" is "Ready"
	I1108 10:28:15.780115 1188449 pod_ready.go:86] duration metric: took 399.782543ms for pod "kube-scheduler-pause-343192" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:28:15.780127 1188449 pod_ready.go:40] duration metric: took 13.816483238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:28:15.841245 1188449 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:28:15.844379 1188449 out.go:179] * Done! kubectl is now configured to use "pause-343192" cluster and "default" namespace by default
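The pod_ready waits above gate completion on every kube-system control-plane pod reporting the Ready condition. A minimal client-go sketch of that per-pod check, with the kubeconfig path left as a placeholder and the pod name taken from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady mirrors the check behind the pod_ready.go lines above:
// a pod counts as "Ready" when its PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-z4htg", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}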
	I1108 10:28:15.319747 1173175 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.067475598s)
	W1108 10:28:15.319784 1173175 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1108 10:28:15.319795 1173175 logs.go:123] Gathering logs for kube-apiserver [8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98] ...
	I1108 10:28:15.319806 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8b587e7ba732a63922855ead0303096d727a37009e9c4c1c9b88ac4c387afe98"
	I1108 10:28:15.353693 1173175 logs.go:123] Gathering logs for kube-scheduler [1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412] ...
	I1108 10:28:15.353765 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1105dba72a787880e3e218fe4f5f216ae65fd9096f5d4c791948c7edf613a412"
	I1108 10:28:15.418753 1173175 logs.go:123] Gathering logs for kube-controller-manager [1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c] ...
	I1108 10:28:15.418790 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e32fef80dff1ecc13a5737490230d243a60c2c25af49bf6a959c3f7cbdb918c"
	I1108 10:28:15.446666 1173175 logs.go:123] Gathering logs for kubelet ...
	I1108 10:28:15.446691 1173175 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 10:28:18.064503 1173175 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
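	A minimal sketch of probing the same healthz endpoint from the host, assuming https://192.168.76.2:8443 (taken from the line above) is reachable; on default RBAC the /healthz and /readyz paths are readable without client credentials, so -k alone should be enough:
	  curl -sk https://192.168.76.2:8443/healthz
	  curl -sk 'https://192.168.76.2:8443/readyz?verbose'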
	
	
	==> CRI-O <==
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.114989858Z" level=info msg="Started container" PID=2244 containerID=a2e998f95e3dabd458d90198ae4130a56a78b9685b3e0f821b670a31300781b6 description=kube-system/etcd-pause-343192/etcd id=ae80516e-7b2b-4d51-a744-e7df56497b0b name=/runtime.v1.RuntimeService/StartContainer sandboxID=933868f3bd04b1fe383e4005366c51e0ba5af4a2beede9e23bd89c36f1ad0a1c
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.19141109Z" level=info msg="Created container f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981: kube-system/kube-controller-manager-pause-343192/kube-controller-manager" id=8fa6a36c-9113-44ca-a6f9-a28dd90bd418 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.192477434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.193087046Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.199979672Z" level=info msg="Starting container: f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981" id=1af01398-4c16-477e-a55c-8f3be6b83624 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.200243983Z" level=info msg="Started container" PID=2254 containerID=d7a533806da9a332f1645c3360d5c4237a84729469ae7cb42a33daa107441f86 description=kube-system/kube-scheduler-pause-343192/kube-scheduler id=e3195917-05e4-411b-a5b4-bff55e638640 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5de00ebc1a698b723db5732db4979d304e64f58f318310c8d3ddabc8e5571ea6
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.212270476Z" level=info msg="Started container" PID=2290 containerID=f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981 description=kube-system/kube-controller-manager-pause-343192/kube-controller-manager id=1af01398-4c16-477e-a55c-8f3be6b83624 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71da58ac41e9e6288ff3e71252c287fffc70241759981d7785f048ddee3efb5d
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.222137912Z" level=info msg="Created container cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb: kube-system/kube-proxy-774lt/kube-proxy" id=a14e5092-583d-4fd1-b8fa-1e491082bb5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.226717319Z" level=info msg="Starting container: cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb" id=76a8a652-f441-48d2-abba-4f6c80048c5a name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.240549569Z" level=info msg="Started container" PID=2281 containerID=cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb description=kube-system/kube-proxy-774lt/kube-proxy id=76a8a652-f441-48d2-abba-4f6c80048c5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=c68d6f30b5413edf011b9619cd4b4670850fad852e914baf85e06aac5a6b4dba
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.259557243Z" level=info msg="Created container 0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f: kube-system/kube-apiserver-pause-343192/kube-apiserver" id=6cf01fea-f2f2-4dc6-91a5-6eae5cbf4e44 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.260288911Z" level=info msg="Starting container: 0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f" id=258d5b4b-76cc-41f0-9a06-7d5c8763dca1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:27:55 pause-343192 crio[2078]: time="2025-11-08T10:27:55.262525931Z" level=info msg="Started container" PID=2329 containerID=0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f description=kube-system/kube-apiserver-pause-343192/kube-apiserver id=258d5b4b-76cc-41f0-9a06-7d5c8763dca1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8734a217dcf0f44f1976df5c61aede2dca74686bece4a74636e39bb2560ab553
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.419850348Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.423823827Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.424013876Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.424066436Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.42734988Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.427383913Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.427414033Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.430824053Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.430858037Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.430880715Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.43387241Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:28:05 pause-343192 crio[2078]: time="2025-11-08T10:28:05.433903605Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0295e0bdcf0cd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   26 seconds ago       Running             kube-apiserver            1                   8734a217dcf0f       kube-apiserver-pause-343192            kube-system
	f4c13d129af56       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   26 seconds ago       Running             kube-controller-manager   1                   71da58ac41e9e       kube-controller-manager-pause-343192   kube-system
	cf7214c160cf2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   26 seconds ago       Running             kube-proxy                1                   c68d6f30b5413       kube-proxy-774lt                       kube-system
	d7a533806da9a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   26 seconds ago       Running             kube-scheduler            1                   5de00ebc1a698       kube-scheduler-pause-343192            kube-system
	a2e998f95e3da       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   26 seconds ago       Running             etcd                      1                   933868f3bd04b       etcd-pause-343192                      kube-system
	808a055bd254c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   26 seconds ago       Running             kindnet-cni               1                   e04012ef6226c       kindnet-5dl8w                          kube-system
	7036025861b31       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   26 seconds ago       Running             coredns                   1                   5b50f6840c84a       coredns-66bc5c9577-z4htg               kube-system
	1d28edcd8cca7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   5b50f6840c84a       coredns-66bc5c9577-z4htg               kube-system
	cbed26c9cc82d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   c68d6f30b5413       kube-proxy-774lt                       kube-system
	a327dc75a2da5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   e04012ef6226c       kindnet-5dl8w                          kube-system
	e64d76a590f59       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   933868f3bd04b       etcd-pause-343192                      kube-system
	6cf1df7c69fa4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   5de00ebc1a698       kube-scheduler-pause-343192            kube-system
	7a08c37ef3799       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   71da58ac41e9e       kube-controller-manager-pause-343192   kube-system
	4c21fbaf9d079       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   8734a217dcf0f       kube-apiserver-pause-343192            kube-system
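	The table above is the node-side crictl listing; a minimal sketch of collecting it (and a single container's recent logs) through minikube ssh, assuming the pause-343192 profile and substituting a real container ID for the placeholder:
	  minikube -p pause-343192 ssh -- sudo crictl ps -a
	  minikube -p pause-343192 ssh -- sudo crictl logs --tail 50 <container-id>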
	
	
	==> coredns [1d28edcd8cca7648e1bc0b2fb042df7c5b1f90debfa5083af69296a4afa052d1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39444 - 21665 "HINFO IN 3237423409004509854.6412274812491590403. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019736976s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7036025861b31b3ce32c7deda2244e7cb402d4a8ef261e6ea3f8a57bb78fce01] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43838 - 48464 "HINFO IN 807073959000323933.6825489918620195988. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.039072833s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
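	The "namespaces is forbidden" error above can be cross-checked from the host by asking the API server whether the coredns service account may list namespaces; a minimal sketch, assuming the pause-343192 context:
	  kubectl --context pause-343192 auth can-i list namespaces \
	    --as=system:serviceaccount:kube-system:coredns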
	
	
	==> describe nodes <==
	Name:               pause-343192
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-343192
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=pause-343192
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_26_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:26:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-343192
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:28:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:28:11 +0000   Sat, 08 Nov 2025 10:26:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:28:11 +0000   Sat, 08 Nov 2025 10:26:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:28:11 +0000   Sat, 08 Nov 2025 10:26:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:28:11 +0000   Sat, 08 Nov 2025 10:27:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-343192
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                b0412103-dbad-4614-89ab-45b015153528
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-z4htg                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-343192                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kindnet-5dl8w                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-pause-343192             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-343192    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-774lt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-343192             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 78s   kube-proxy       
	  Normal   Starting                 20s   kube-proxy       
	  Normal   Starting                 85s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 85s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s   kubelet          Node pause-343192 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s   kubelet          Node pause-343192 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s   kubelet          Node pause-343192 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s   node-controller  Node pause-343192 event: Registered Node pause-343192 in Controller
	  Normal   NodeReady                38s   kubelet          Node pause-343192 status is now: NodeReady
	  Normal   RegisteredNode           18s   node-controller  Node pause-343192 event: Registered Node pause-343192 in Controller
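	The block above is standard kubectl describe-node output; a minimal sketch of pulling the same node state again, plus just the condition types and statuses, assuming the pause-343192 context:
	  kubectl --context pause-343192 describe node pause-343192
	  kubectl --context pause-343192 get node pause-343192 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'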
	
	
	==> dmesg <==
	[Nov 8 10:03] overlayfs: idmapped layers are currently not supported
	[  +3.322852] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:04] overlayfs: idmapped layers are currently not supported
	[ +18.943896] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:05] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:09] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[ +18.424643] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a2e998f95e3dabd458d90198ae4130a56a78b9685b3e0f821b670a31300781b6] <==
	{"level":"warn","ts":"2025-11-08T10:27:57.818476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.849149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.869470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.898206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.919791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.975880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:57.999387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.029247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.050299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.079074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.124105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.162183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.197714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.251436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.277611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.327996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.376723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.421306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.450785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.498853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.552314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.613176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.655606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.688649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:27:58.890276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48792","server-name":"","error":"EOF"}
	
	
	==> etcd [e64d76a590f592ad5123ea146cba17cee655e4c302e7d2c00d65f628678c8146] <==
	{"level":"warn","ts":"2025-11-08T10:26:53.027945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.045825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.063170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.114398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.132301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.160013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:26:53.284668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45322","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T10:27:47.466363Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-08T10:27:47.466435Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-343192","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-08T10:27:47.466531Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T10:27:47.609617Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T10:27:47.609699Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T10:27:47.609722Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-08T10:27:47.609801Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-08T10:27:47.609843Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T10:27:47.609880Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T10:27:47.609891Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T10:27:47.609861Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-08T10:27:47.609929Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T10:27:47.609959Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T10:27:47.609967Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T10:27:47.613303Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-08T10:27:47.613383Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T10:27:47.613411Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:27:47.613429Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-343192","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 10:28:21 up  9:10,  0 user,  load average: 1.88, 2.70, 2.39
	Linux pause-343192 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [808a055bd254c0bbbee4c3c751830708801f4ced02a2c5deb329197a434cd541] <==
	I1108 10:27:55.221536       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:27:55.221730       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:27:55.221853       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:27:55.221870       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:27:55.221883       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:27:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:27:55.416965       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:27:55.425732       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:27:55.425849       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:27:55.426644       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 10:28:01.026723       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:28:01.026828       1 metrics.go:72] Registering metrics
	I1108 10:28:01.026941       1 controller.go:711] "Syncing nftables rules"
	I1108 10:28:05.419458       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:28:05.419519       1 main.go:301] handling current node
	I1108 10:28:15.416893       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:28:15.417014       1 main.go:301] handling current node
	
	
	==> kindnet [a327dc75a2da5df572b9729b0560d0810a03921afea0a1ea766f4032377a4d50] <==
	I1108 10:27:02.610240       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:27:02.610599       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:27:02.610754       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:27:02.610826       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:27:02.610860       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:27:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:27:02.809922       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:27:02.809951       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:27:02.810003       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:27:02.811559       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:27:32.812885       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:27:32.813012       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:27:32.813017       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:27:32.813115       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:27:34.310478       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:27:34.310602       1 metrics.go:72] Registering metrics
	I1108 10:27:34.310721       1 controller.go:711] "Syncing nftables rules"
	I1108 10:27:42.809649       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:27:42.809706       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0295e0bdcf0cdacbf57678694f0a8520ac6f5b4d6434b6c434710db93ce70d5f] <==
	I1108 10:28:00.857250       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:28:00.885911       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:28:00.895548       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:28:00.895587       1 policy_source.go:240] refreshing policies
	I1108 10:28:00.910004       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:28:00.925751       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:28:00.946371       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:28:00.947113       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 10:28:00.947223       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:28:00.947285       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:28:00.948538       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 10:28:00.951071       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:28:00.962345       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:28:00.948586       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:28:00.948595       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:28:00.948944       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:28:00.970941       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1108 10:28:00.969444       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:28:00.985187       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:28:01.464131       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:28:02.084487       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:28:03.487690       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:28:03.779098       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:28:03.831655       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:28:03.878861       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [4c21fbaf9d079fb5c4cbd03ca8e0149295b10f764ae1c6826063a0516b80ba46] <==
	W1108 10:27:47.483871       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.483912       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.483954       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484057       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484140       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484218       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484291       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484342       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484412       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484508       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484574       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484645       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484704       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484755       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484822       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484892       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.484973       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485057       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485162       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485224       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485280       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485428       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485506       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1108 10:27:47.485577       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7a08c37ef37992bde0d0bd0f71fdddbca47883b01dd90e96da703efd35f23fd8] <==
	I1108 10:27:01.237273       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:27:01.240666       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-343192" podCIDRs=["10.244.0.0/24"]
	I1108 10:27:01.242074       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:27:01.250348       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:27:01.254525       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 10:27:01.259092       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 10:27:01.261356       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:27:01.268534       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:27:01.268645       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:27:01.268791       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:27:01.268900       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:27:01.269310       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:27:01.269883       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:27:01.269966       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 10:27:01.269978       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:27:01.269994       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 10:27:01.271344       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:27:01.271404       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:27:01.272629       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:27:01.272699       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 10:27:01.272711       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:27:01.272719       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:27:01.273151       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:27:01.292759       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:27:46.230277       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [f4c13d129af5682e4f6b5351993a0d0a33abbe9b14c9824e218a1f5e82c3e981] <==
	I1108 10:28:03.476613       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 10:28:03.476653       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:28:03.476522       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:28:03.476716       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 10:28:03.476763       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:28:03.476877       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:28:03.476987       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-343192"
	I1108 10:28:03.476651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 10:28:03.477130       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:28:03.486587       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:28:03.486619       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:28:03.486627       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:28:03.486717       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:28:03.488197       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:28:03.491331       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 10:28:03.491425       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 10:28:03.491447       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 10:28:03.491463       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 10:28:03.491469       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 10:28:03.497996       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:28:03.514283       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:28:03.514255       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:28:03.515071       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:28:03.522542       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:28:03.523807       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cbed26c9cc82d142d3d895dc7635d0efb73e033cb99b08450139b3c5de56c054] <==
	I1108 10:27:02.593489       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:27:02.694365       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:27:02.795358       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:27:02.795395       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:27:02.795471       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:27:02.879224       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:27:02.879280       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:27:02.891902       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:27:02.892201       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:27:02.892224       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:27:02.893739       1 config.go:200] "Starting service config controller"
	I1108 10:27:02.893769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:27:02.893787       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:27:02.893791       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:27:02.893801       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:27:02.893804       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:27:02.894394       1 config.go:309] "Starting node config controller"
	I1108 10:27:02.894412       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:27:02.894418       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:27:02.993925       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:27:02.993954       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 10:27:02.993935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [cf7214c160cf22e17daebadc0005ad9cfb7ddc4d3bab50520210d7d64d6476bb] <==
	I1108 10:27:57.397150       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:27:59.126795       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:28:01.061986       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:28:01.062031       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:28:01.063024       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:28:01.106889       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:28:01.107009       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:28:01.117065       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:28:01.117416       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:28:01.117433       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:28:01.125284       1 config.go:200] "Starting service config controller"
	I1108 10:28:01.125369       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:28:01.125409       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:28:01.125437       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:28:01.125476       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:28:01.125502       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:28:01.126472       1 config.go:309] "Starting node config controller"
	I1108 10:28:01.126529       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:28:01.126537       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:28:01.226475       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:28:01.226587       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:28:01.226602       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6cf1df7c69fa46c783c4d0d0ed7275b2f7575903b38be95723c5fadb80a5adb2] <==
	E1108 10:26:54.623148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:26:54.623182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:26:54.623237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:26:54.623281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:26:54.623325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:26:54.626628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:26:54.631099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:26:54.631187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:26:54.631246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:26:54.631308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:26:54.631416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:26:54.631465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:26:54.631507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:26:54.631573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:26:54.635156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:26:54.635234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:26:55.460090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:26:55.633247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1108 10:26:58.106700       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:27:47.465103       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1108 10:27:47.465132       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1108 10:27:47.465168       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1108 10:27:47.465209       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:27:47.465960       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1108 10:27:47.466000       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d7a533806da9a332f1645c3360d5c4237a84729469ae7cb42a33daa107441f86] <==
	I1108 10:27:57.204803       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:28:00.980856       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:28:00.980891       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:28:00.999852       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:28:01.000025       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:28:01.000082       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:28:01.000152       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:28:01.001745       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:28:01.006822       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:28:01.006929       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:28:01.006965       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:28:01.100647       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:28:01.107408       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:28:01.107533       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:27:54 pause-343192 kubelet[1316]: E1108 10:27:54.974817    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z4htg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ccbca0f1-a4f6-4bdb-91f4-b4eb718ee497" pod="kube-system/coredns-66bc5c9577-z4htg"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: I1108 10:27:55.006988    1316 scope.go:117] "RemoveContainer" containerID="4c21fbaf9d079fb5c4cbd03ca8e0149295b10f764ae1c6826063a0516b80ba46"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.007931    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z4htg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ccbca0f1-a4f6-4bdb-91f4-b4eb718ee497" pod="kube-system/coredns-66bc5c9577-z4htg"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.008480    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a017b135be5dbd8844db0dbb7371c28d" pod="kube-system/etcd-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.008797    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="80cc6e29e92adf75398fa57125331d6f" pod="kube-system/kube-scheduler-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.009107    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0fb6837cb311c408ba2c0a7149a4c333" pod="kube-system/kube-apiserver-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.009390    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a331d88b5b4a93099e5c6ac0fa526396" pod="kube-system/kube-controller-manager-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.009664    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-5dl8w\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e6e7ac85-7324-4cb4-955e-95b1709547a2" pod="kube-system/kindnet-5dl8w"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: I1108 10:27:55.012883    1316 scope.go:117] "RemoveContainer" containerID="cbed26c9cc82d142d3d895dc7635d0efb73e033cb99b08450139b3c5de56c054"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.016144    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-5dl8w\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e6e7ac85-7324-4cb4-955e-95b1709547a2" pod="kube-system/kindnet-5dl8w"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.016463    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-774lt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="840433f1-4620-41e8-80eb-4190421a0b49" pod="kube-system/kube-proxy-774lt"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.022733    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z4htg\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ccbca0f1-a4f6-4bdb-91f4-b4eb718ee497" pod="kube-system/coredns-66bc5c9577-z4htg"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.024868    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a017b135be5dbd8844db0dbb7371c28d" pod="kube-system/etcd-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.025271    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="80cc6e29e92adf75398fa57125331d6f" pod="kube-system/kube-scheduler-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.025566    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0fb6837cb311c408ba2c0a7149a4c333" pod="kube-system/kube-apiserver-pause-343192"
	Nov 08 10:27:55 pause-343192 kubelet[1316]: E1108 10:27:55.025867    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-343192\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a331d88b5b4a93099e5c6ac0fa526396" pod="kube-system/kube-controller-manager-pause-343192"
	Nov 08 10:28:00 pause-343192 kubelet[1316]: E1108 10:28:00.668630    1316 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-343192\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-343192' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 08 10:28:00 pause-343192 kubelet[1316]: E1108 10:28:00.669148    1316 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-343192\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-343192' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 08 10:28:00 pause-343192 kubelet[1316]: E1108 10:28:00.669270    1316 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-343192\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-343192' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 08 10:28:00 pause-343192 kubelet[1316]: E1108 10:28:00.681119    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-343192\" is forbidden: User \"system:node:pause-343192\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-343192' and this object" podUID="a017b135be5dbd8844db0dbb7371c28d" pod="kube-system/etcd-pause-343192"
	Nov 08 10:28:00 pause-343192 kubelet[1316]: E1108 10:28:00.801550    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-343192\" is forbidden: User \"system:node:pause-343192\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-343192' and this object" podUID="80cc6e29e92adf75398fa57125331d6f" pod="kube-system/kube-scheduler-pause-343192"
	Nov 08 10:28:06 pause-343192 kubelet[1316]: W1108 10:28:06.947523    1316 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 08 10:28:16 pause-343192 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:28:16 pause-343192 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:28:16 pause-343192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
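The journal at the end of the dump shows systemd deactivating kubelet.service, which is part of what `minikube pause` does on the node before the status check below runs. A quick manual look at that state (a sketch, assuming the pause-343192 profile from these logs is still up; the commands are illustrative and not part of the test):

	minikube -p pause-343192 ssh -- "sudo systemctl is-active kubelet"   # reports "inactive" once the pause has stopped kubelet, per the journal above
	minikube -p pause-343192 ssh -- "sudo crictl ps"                     # lists the containers the runtime still tracks on the node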
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-343192 -n pause-343192
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-343192 -n pause-343192: exit status 2 (348.725882ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-343192 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1108 10:28:22.712740 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-171136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-171136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.126389ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:31:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-171136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
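The MK_ADDON_ENABLE_PAUSED error above quotes the exact command minikube ran for its paused-state check, so the failure can be reproduced by hand (a sketch, assuming the old-k8s-version-171136 profile is still running; the command is the one quoted in the error, not an addition of the test):

	# Same check the addon-enable path ran on the node; on this crio node it
	# exits 1 because /run/runc (runc's default state directory) does not exist.
	minikube -p old-k8s-version-171136 ssh -- "sudo runc list -f json"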
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-171136 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-171136 describe deploy/metrics-server -n kube-system: exit status 1 (90.659623ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-171136 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
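The assertion above checks the metrics-server deployment description for the overridden image reference; because the enable step never created the deployment, the describe output is empty. A hand check of the same thing would look like this (a sketch; the jsonpath query is illustrative, the test itself uses `kubectl describe`):

	kubectl --context old-k8s-version-171136 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4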
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-171136
helpers_test.go:243: (dbg) docker inspect old-k8s-version-171136:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d",
	        "Created": "2025-11-08T10:30:49.022889439Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1205785,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:30:49.090720447Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/hosts",
	        "LogPath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d-json.log",
	        "Name": "/old-k8s-version-171136",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-171136:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-171136",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d",
	                "LowerDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-171136",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-171136/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-171136",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-171136",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-171136",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9144842113dd78b006bad45b8dff3064c54e5bc196e59aba502adcc0e251ea1",
	            "SandboxKey": "/var/run/docker/netns/b9144842113d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34507"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34508"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34511"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34509"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34510"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-171136": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:7a:81:df:47:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de4af9df12e1c8f538a1e008be00be15053361dbab11b5398b5ceb5166430671",
	                    "EndpointID": "e7b068957db75edaae0efafc397c984db99eafa320f4d87cf3a15791664f32cb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-171136",
	                        "b7cf45de166d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-171136 -n old-k8s-version-171136
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-171136 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-171136 logs -n 25: (1.19217497s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-731120 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo containerd config dump                                                                                                                                                                                                  │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo crio config                                                                                                                                                                                                             │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ delete  │ -p cilium-731120                                                                                                                                                                                                                              │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p force-systemd-env-680693 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-680693  │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ delete  │ -p kubernetes-upgrade-666491                                                                                                                                                                                                                  │ kubernetes-upgrade-666491 │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-837698    │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p force-systemd-env-680693                                                                                                                                                                                                                   │ force-systemd-env-680693  │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p cert-options-517657 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:30 UTC │
	│ ssh     │ cert-options-517657 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ ssh     │ -p cert-options-517657 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-517657                                                                                                                                                                                                                        │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-171136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:30:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:30:42.961443 1205394 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:30:42.961632 1205394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:30:42.961638 1205394 out.go:374] Setting ErrFile to fd 2...
	I1108 10:30:42.961643 1205394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:30:42.961910 1205394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:30:42.962343 1205394 out.go:368] Setting JSON to false
	I1108 10:30:42.963254 1205394 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33188,"bootTime":1762564655,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:30:42.963334 1205394 start.go:143] virtualization:  
	I1108 10:30:42.966843 1205394 out.go:179] * [old-k8s-version-171136] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:30:42.971210 1205394 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:30:42.971353 1205394 notify.go:221] Checking for updates...
	I1108 10:30:42.977572 1205394 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:30:42.980941 1205394 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:30:42.984018 1205394 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:30:42.987030 1205394 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:30:42.990052 1205394 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:30:42.993590 1205394 config.go:182] Loaded profile config "cert-expiration-837698": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:30:42.993738 1205394 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:30:43.032663 1205394 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:30:43.032799 1205394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:30:43.093217 1205394 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:30:43.083203389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:30:43.093337 1205394 docker.go:319] overlay module found
	I1108 10:30:43.096624 1205394 out.go:179] * Using the docker driver based on user configuration
	I1108 10:30:43.099535 1205394 start.go:309] selected driver: docker
	I1108 10:30:43.099554 1205394 start.go:930] validating driver "docker" against <nil>
	I1108 10:30:43.099569 1205394 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:30:43.100333 1205394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:30:43.163954 1205394 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:30:43.153198895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:30:43.164193 1205394 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:30:43.164432 1205394 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:30:43.167589 1205394 out.go:179] * Using Docker driver with root privileges
	I1108 10:30:43.170537 1205394 cni.go:84] Creating CNI manager for ""
	I1108 10:30:43.170623 1205394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:30:43.170636 1205394 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:30:43.170745 1205394 start.go:353] cluster config:
	{Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:30:43.177489 1205394 out.go:179] * Starting "old-k8s-version-171136" primary control-plane node in "old-k8s-version-171136" cluster
	I1108 10:30:43.180541 1205394 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:30:43.183584 1205394 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:30:43.186546 1205394 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:30:43.186607 1205394 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1108 10:30:43.186622 1205394 cache.go:59] Caching tarball of preloaded images
	I1108 10:30:43.186633 1205394 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:30:43.186705 1205394 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:30:43.186715 1205394 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1108 10:30:43.186833 1205394 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/config.json ...
	I1108 10:30:43.186852 1205394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/config.json: {Name:mkcdd98c35444bab971e7f239f1a8df630ea24c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:30:43.206265 1205394 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:30:43.206291 1205394 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:30:43.206303 1205394 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:30:43.206325 1205394 start.go:360] acquireMachinesLock for old-k8s-version-171136: {Name:mk3d8c83478e2975fc25a9dafdc0d687aa9eb7c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:30:43.206425 1205394 start.go:364] duration metric: took 79.784µs to acquireMachinesLock for "old-k8s-version-171136"
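	The acquireMachinesLock entry above runs with Delay:500ms and Timeout:10m0s. A minimal sketch of that acquire-with-retry pattern, assuming a simple lock file rather than minikube's actual lock implementation (the path and helper names below are illustrative only):

package main

import (
	"fmt"
	"os"
	"time"
)

// tryAcquire attempts to create the lock file exclusively; failure to create
// means another process currently holds the lock.
func tryAcquire(path string) (bool, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	if err != nil {
		if os.IsExist(err) {
			return false, nil
		}
		return false, err
	}
	return true, f.Close()
}

// acquireWithRetry polls every delay until timeout elapses, mirroring the
// Delay:500ms / Timeout:10m0s parameters shown in the log entry above.
func acquireWithRetry(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := tryAcquire(path)
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(delay)
	}
}

func main() {
	lock := "/tmp/minikube-machines.lock" // hypothetical path, not the real lock file
	if err := acquireWithRetry(lock, 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer os.Remove(lock)
	fmt.Println("lock acquired")
}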
	I1108 10:30:43.206456 1205394 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:30:43.206526 1205394 start.go:125] createHost starting for "" (driver="docker")
	I1108 10:30:43.209928 1205394 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:30:43.210154 1205394 start.go:159] libmachine.API.Create for "old-k8s-version-171136" (driver="docker")
	I1108 10:30:43.210187 1205394 client.go:173] LocalClient.Create starting
	I1108 10:30:43.210555 1205394 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem
	I1108 10:30:43.210602 1205394 main.go:143] libmachine: Decoding PEM data...
	I1108 10:30:43.210620 1205394 main.go:143] libmachine: Parsing certificate...
	I1108 10:30:43.210687 1205394 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem
	I1108 10:30:43.210715 1205394 main.go:143] libmachine: Decoding PEM data...
	I1108 10:30:43.210728 1205394 main.go:143] libmachine: Parsing certificate...
	I1108 10:30:43.211086 1205394 cli_runner.go:164] Run: docker network inspect old-k8s-version-171136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:30:43.226564 1205394 cli_runner.go:211] docker network inspect old-k8s-version-171136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:30:43.226651 1205394 network_create.go:284] running [docker network inspect old-k8s-version-171136] to gather additional debugging logs...
	I1108 10:30:43.226669 1205394 cli_runner.go:164] Run: docker network inspect old-k8s-version-171136
	W1108 10:30:43.242731 1205394 cli_runner.go:211] docker network inspect old-k8s-version-171136 returned with exit code 1
	I1108 10:30:43.242767 1205394 network_create.go:287] error running [docker network inspect old-k8s-version-171136]: docker network inspect old-k8s-version-171136: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-171136 not found
	I1108 10:30:43.242795 1205394 network_create.go:289] output of [docker network inspect old-k8s-version-171136]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-171136 not found
	
	** /stderr **
	I1108 10:30:43.242892 1205394 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:30:43.278407 1205394 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f127b1978c3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:c7:37:65:8c:96} reservation:<nil>}
	I1108 10:30:43.278686 1205394 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b98bf73d2e94 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:99:be:46:ea:86} reservation:<nil>}
	I1108 10:30:43.278993 1205394 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c4df73992be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:ad:c1:c0:ea:6d} reservation:<nil>}
	I1108 10:30:43.279320 1205394 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-77aa48145c0f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7e:0f:d8:d7:ae:a4} reservation:<nil>}
	I1108 10:30:43.279810 1205394 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001973940}
	I1108 10:30:43.279842 1205394 network_create.go:124] attempt to create docker network old-k8s-version-171136 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 10:30:43.279905 1205394 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-171136 old-k8s-version-171136
	I1108 10:30:43.340214 1205394 network_create.go:108] docker network old-k8s-version-171136 192.168.85.0/24 created
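	The subnet scan above walks 192.168.x.0/24 candidates in steps of 9 in the third octet (49, 58, 67, 76, ...) and settles on the first one no existing bridge occupies. A rough sketch of that selection logic, inferred from the log lines above rather than taken from minikube's network package:

package main

import (
	"fmt"
	"net"
)

// freeSubnet returns the first 192.168.x.0/24 candidate that is not already
// taken, stepping the third octet by 9 (49, 58, 67, 76, 85, ...) as the scan
// above shows. This is an illustrative sketch, not minikube's implementation.
func freeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet")
}

func main() {
	// Subnets reported as taken earlier in this log.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	subnet, err := freeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using", subnet) // prints 192.168.85.0/24, matching the network created above
}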
	I1108 10:30:43.340246 1205394 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-171136" container
	I1108 10:30:43.340318 1205394 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:30:43.356645 1205394 cli_runner.go:164] Run: docker volume create old-k8s-version-171136 --label name.minikube.sigs.k8s.io=old-k8s-version-171136 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:30:43.375077 1205394 oci.go:103] Successfully created a docker volume old-k8s-version-171136
	I1108 10:30:43.375169 1205394 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-171136-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-171136 --entrypoint /usr/bin/test -v old-k8s-version-171136:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:30:43.922804 1205394 oci.go:107] Successfully prepared a docker volume old-k8s-version-171136
	I1108 10:30:43.922858 1205394 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:30:43.922879 1205394 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:30:43.922957 1205394 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-171136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 10:30:48.951472 1205394 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-171136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (5.028469151s)
	I1108 10:30:48.951508 1205394 kic.go:203] duration metric: took 5.028625421s to extract preloaded images to volume ...
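	The extraction step above untars the lz4-compressed preload inside a throwaway container whose entrypoint is /usr/bin/tar, with the tarball mounted read-only and the node volume mounted at /extractDir. A sketch that assembles the same style of docker invocation (the arguments in main are placeholders, not the exact values from this run):

package main

import (
	"fmt"
	"os/exec"
)

// preloadExtractCmd builds the docker invocation seen in the log: mount the
// lz4-compressed tarball read-only, mount the node volume at /extractDir, and
// untar into it using the kicbase image's /usr/bin/tar as the entrypoint.
func preloadExtractCmd(tarball, volume, kicbase string) *exec.Cmd {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		kicbase,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	)
}

func main() {
	// Placeholder arguments; the log above shows the real tarball, volume and image digest.
	cmd := preloadExtractCmd(
		"/path/to/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4",
		"old-k8s-version-171136",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837",
	)
	fmt.Println(cmd.String()) // print rather than run, so the sketch works without Docker
}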
	W1108 10:30:48.951660 1205394 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:30:48.951768 1205394 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:30:49.007699 1205394 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-171136 --name old-k8s-version-171136 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-171136 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-171136 --network old-k8s-version-171136 --ip 192.168.85.2 --volume old-k8s-version-171136:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:30:49.315527 1205394 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Running}}
	I1108 10:30:49.340908 1205394 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:30:49.365335 1205394 cli_runner.go:164] Run: docker exec old-k8s-version-171136 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:30:49.419630 1205394 oci.go:144] the created container "old-k8s-version-171136" has a running status.
	I1108 10:30:49.419658 1205394 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa...
	I1108 10:30:50.237973 1205394 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:30:50.259132 1205394 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:30:50.282618 1205394 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:30:50.282638 1205394 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-171136 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:30:50.322580 1205394 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:30:50.341275 1205394 machine.go:94] provisionDockerMachine start ...
	I1108 10:30:50.341385 1205394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:30:50.358366 1205394 main.go:143] libmachine: Using SSH client type: native
	I1108 10:30:50.358710 1205394 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34507 <nil> <nil>}
	I1108 10:30:50.358720 1205394 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:30:50.359505 1205394 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:30:53.512083 1205394 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171136
	
	I1108 10:30:53.512110 1205394 ubuntu.go:182] provisioning hostname "old-k8s-version-171136"
	I1108 10:30:53.512184 1205394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:30:53.530589 1205394 main.go:143] libmachine: Using SSH client type: native
	I1108 10:30:53.530943 1205394 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34507 <nil> <nil>}
	I1108 10:30:53.530989 1205394 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171136 && echo "old-k8s-version-171136" | sudo tee /etc/hostname
	I1108 10:30:53.694266 1205394 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171136
	
	I1108 10:30:53.694349 1205394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:30:53.711725 1205394 main.go:143] libmachine: Using SSH client type: native
	I1108 10:30:53.712035 1205394 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34507 <nil> <nil>}
	I1108 10:30:53.712059 1205394 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171136/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:30:53.860777 1205394 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:30:53.860806 1205394 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:30:53.860843 1205394 ubuntu.go:190] setting up certificates
	I1108 10:30:53.860856 1205394 provision.go:84] configureAuth start
	I1108 10:30:53.860938 1205394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-171136
	I1108 10:30:53.878217 1205394 provision.go:143] copyHostCerts
	I1108 10:30:53.878286 1205394 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:30:53.878303 1205394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:30:53.878381 1205394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:30:53.878484 1205394 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:30:53.878493 1205394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:30:53.878522 1205394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:30:53.878590 1205394 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:30:53.878601 1205394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:30:53.878628 1205394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:30:53.878691 1205394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171136 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-171136]
	I1108 10:30:54.209551 1205394 provision.go:177] copyRemoteCerts
	I1108 10:30:54.209619 1205394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:30:54.209698 1205394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:30:54.230912 1205394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34507 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:30:54.340593 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1108 10:30:54.360634 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:30:54.379648 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:30:54.398587 1205394 provision.go:87] duration metric: took 537.705376ms to configureAuth
	I1108 10:30:54.398614 1205394 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:30:54.398803 1205394 config.go:182] Loaded profile config "old-k8s-version-171136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:30:54.398918 1205394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:30:54.417764 1205394 main.go:143] libmachine: Using SSH client type: native
	I1108 10:30:54.418070 1205394 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34507 <nil> <nil>}
	I1108 10:30:54.418090 1205394 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:30:54.678680 1205394 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:30:54.678768 1205394 machine.go:97] duration metric: took 4.337470638s to provisionDockerMachine
	I1108 10:30:54.678793 1205394 client.go:176] duration metric: took 11.468598849s to LocalClient.Create
	I1108 10:30:54.678841 1205394 start.go:167] duration metric: took 11.468687141s to libmachine.API.Create "old-k8s-version-171136"
	I1108 10:30:54.678864 1205394 start.go:293] postStartSetup for "old-k8s-version-171136" (driver="docker")
	I1108 10:30:54.678902 1205394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:30:54.678985 1205394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:30:54.679063 1205394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:30:54.696273 1205394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34507 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:30:54.801063 1205394 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:30:54.804430 1205394 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:30:54.804486 1205394 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:30:54.804498 1205394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:30:54.804553 1205394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:30:54.804651 1205394 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:30:54.804766 1205394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:30:54.812491 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:30:54.830450 1205394 start.go:296] duration metric: took 151.538718ms for postStartSetup
	I1108 10:30:54.830832 1205394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-171136
	I1108 10:30:54.848820 1205394 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/config.json ...
	I1108 10:30:54.849129 1205394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:30:54.849183 1205394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:30:54.870340 1205394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34507 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:30:54.977576 1205394 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:30:54.982466 1205394 start.go:128] duration metric: took 11.775923624s to createHost
	I1108 10:30:54.982492 1205394 start.go:83] releasing machines lock for "old-k8s-version-171136", held for 11.776053235s
	I1108 10:30:54.982568 1205394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-171136
	I1108 10:30:54.999361 1205394 ssh_runner.go:195] Run: cat /version.json
	I1108 10:30:54.999427 1205394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:30:54.999683 1205394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:30:54.999769 1205394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:30:55.037831 1205394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34507 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:30:55.044142 1205394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34507 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:30:55.279068 1205394 ssh_runner.go:195] Run: systemctl --version
	I1108 10:30:55.287099 1205394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:30:55.330184 1205394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:30:55.334587 1205394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:30:55.334675 1205394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:30:55.365506 1205394 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:30:55.365588 1205394 start.go:496] detecting cgroup driver to use...
	I1108 10:30:55.365636 1205394 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:30:55.365696 1205394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:30:55.385266 1205394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:30:55.403936 1205394 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:30:55.404009 1205394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:30:55.424691 1205394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:30:55.446935 1205394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:30:55.570406 1205394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:30:55.693257 1205394 docker.go:234] disabling docker service ...
	I1108 10:30:55.693342 1205394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:30:55.716650 1205394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:30:55.730069 1205394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:30:55.843319 1205394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:30:55.985946 1205394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:30:55.998600 1205394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:30:56.018720 1205394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 10:30:56.018808 1205394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:30:56.028139 1205394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:30:56.028216 1205394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:30:56.038517 1205394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:30:56.047958 1205394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:30:56.057254 1205394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:30:56.065793 1205394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:30:56.075147 1205394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:30:56.089595 1205394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:30:56.098594 1205394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:30:56.106118 1205394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:30:56.113586 1205394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:30:56.239484 1205394 ssh_runner.go:195] Run: sudo systemctl restart crio
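	The sed calls above point the cri-o drop-in at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager before the daemon restart. A small sketch of the same key = value rewrite done in Go instead of sed; the demo path below is a stand-in for /etc/crio/crio.conf.d/02-crio.conf inside the node:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites a single `key = value` line in a crio drop-in, the
// same effect as the `sed -i 's|^.*key = .*$|key = "value"|'` calls above.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	// Scratch file for the demo; the real file lives inside the node container.
	conf := "/tmp/02-crio.conf"
	_ = os.WriteFile(conf, []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"), 0o644)
	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
		panic(err)
	}
	if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
		panic(err)
	}
	out, _ := os.ReadFile(conf)
	fmt.Print(string(out))
}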
	I1108 10:30:56.401018 1205394 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:30:56.401138 1205394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:30:56.405234 1205394 start.go:564] Will wait 60s for crictl version
	I1108 10:30:56.405351 1205394 ssh_runner.go:195] Run: which crictl
	I1108 10:30:56.408913 1205394 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:30:56.438146 1205394 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:30:56.438246 1205394 ssh_runner.go:195] Run: crio --version
	I1108 10:30:56.471485 1205394 ssh_runner.go:195] Run: crio --version
	I1108 10:30:56.505545 1205394 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1108 10:30:56.508550 1205394 cli_runner.go:164] Run: docker network inspect old-k8s-version-171136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:30:56.525618 1205394 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:30:56.529603 1205394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
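	The one-liner above rewrites /etc/hosts by filtering out any stale host.minikube.internal line and appending the gateway mapping before copying the file back with sudo. An equivalent sketch in Go, written against a scratch file so it can run without root (the same pattern is repeated later for control-plane.minikube.internal):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry removes any existing line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log above.
func ensureHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Scratch file for the demo; the provisioning step edits /etc/hosts in place.
	path := "/tmp/hosts-demo"
	if err := ensureHostEntry(path, "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("updated", path)
}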
	I1108 10:30:56.539533 1205394 kubeadm.go:884] updating cluster {Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:30:56.539657 1205394 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:30:56.539716 1205394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:30:56.575790 1205394 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:30:56.575812 1205394 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:30:56.575869 1205394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:30:56.601728 1205394 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:30:56.601749 1205394 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:30:56.601758 1205394 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1108 10:30:56.601856 1205394 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-171136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:30:56.601941 1205394 ssh_runner.go:195] Run: crio config
	I1108 10:30:56.677944 1205394 cni.go:84] Creating CNI manager for ""
	I1108 10:30:56.677966 1205394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:30:56.677990 1205394 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:30:56.678012 1205394 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-171136 NodeName:old-k8s-version-171136 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:30:56.678155 1205394 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-171136"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:30:56.678231 1205394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1108 10:30:56.685979 1205394 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:30:56.686048 1205394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:30:56.693712 1205394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1108 10:30:56.706885 1205394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:30:56.720255 1205394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
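	The kubeadm manifest above (written to kubeadm.yaml.new here) is rendered from the option set logged at kubeadm.go:190. A simplified sketch of rendering such a manifest with text/template, covering only a few of the fields shown and not reproducing minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Only a handful of the options from the log are modeled here; the real
// template also renders admission plugins, extra args, and the kubelet and
// kube-proxy sections shown above.
type initConfig struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	CRISocket         string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	cfg := initConfig{ // values taken from the generated config above
		AdvertiseAddress:  "192.168.85.2",
		BindPort:          8443,
		NodeName:          "old-k8s-version-171136",
		CRISocket:         "unix:///var/run/crio/crio.sock",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.28.0",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}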
	I1108 10:30:56.733252 1205394 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:30:56.736927 1205394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:30:56.746832 1205394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:30:56.862154 1205394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:30:56.879744 1205394 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136 for IP: 192.168.85.2
	I1108 10:30:56.879818 1205394 certs.go:195] generating shared ca certs ...
	I1108 10:30:56.879858 1205394 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:30:56.880036 1205394 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:30:56.880124 1205394 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:30:56.880160 1205394 certs.go:257] generating profile certs ...
	I1108 10:30:56.880237 1205394 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.key
	I1108 10:30:56.880277 1205394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt with IP's: []
	I1108 10:30:57.371252 1205394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt ...
	I1108 10:30:57.371287 1205394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: {Name:mke79028b713e44c0ca886cfa7c94588b37c45ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:30:57.371522 1205394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.key ...
	I1108 10:30:57.371555 1205394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.key: {Name:mk817ff09e9747a3c6862de92aa6e51881dd6355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:30:57.371653 1205394 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.key.3f7b60cf
	I1108 10:30:57.371672 1205394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.crt.3f7b60cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1108 10:30:57.746296 1205394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.crt.3f7b60cf ...
	I1108 10:30:57.746328 1205394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.crt.3f7b60cf: {Name:mkdfe6982fae121430e91a39572b02411d0ce807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:30:57.746514 1205394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.key.3f7b60cf ...
	I1108 10:30:57.746530 1205394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.key.3f7b60cf: {Name:mkb0cf066e459ff6fa8e97e7e475d102620784a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:30:57.746611 1205394 certs.go:382] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.crt.3f7b60cf -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.crt
	I1108 10:30:57.746718 1205394 certs.go:386] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.key.3f7b60cf -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.key
	I1108 10:30:57.746786 1205394 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.key
	I1108 10:30:57.746809 1205394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.crt with IP's: []
	I1108 10:30:59.156786 1205394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.crt ...
	I1108 10:30:59.156817 1205394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.crt: {Name:mkfc517dbe5f802957af613b86056fb11471c4a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:30:59.157001 1205394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.key ...
	I1108 10:30:59.157017 1205394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.key: {Name:mkcb218e0af13c96eb781a6f4ae38a3ac4ca15c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
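	The crypto.go entries above mint the profile's apiserver certificate with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. A compact sketch of issuing a certificate with those SANs via crypto/x509; it self-signs for brevity, whereas the real certificate is signed by the shared minikubeCA, and the subject below is an assumption:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a template whose IP SANs match the ones logged for apiserver.crt.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // assumed subject, not taken from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	// Self-signed for brevity: the template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}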
	I1108 10:30:59.157216 1205394 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:30:59.157256 1205394 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:30:59.157274 1205394 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:30:59.157298 1205394 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:30:59.157326 1205394 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:30:59.157387 1205394 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:30:59.157435 1205394 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:30:59.157989 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:30:59.175637 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:30:59.193028 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:30:59.210824 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:30:59.229134 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1108 10:30:59.247080 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:30:59.266030 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:30:59.287612 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:30:59.306658 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:30:59.324988 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:30:59.343526 1205394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:30:59.363859 1205394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:30:59.377534 1205394 ssh_runner.go:195] Run: openssl version
	I1108 10:30:59.384265 1205394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:30:59.392784 1205394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:30:59.396202 1205394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:30:59.396293 1205394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:30:59.437422 1205394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:30:59.445900 1205394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:30:59.455135 1205394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:30:59.459388 1205394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:30:59.459458 1205394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:30:59.506120 1205394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:30:59.515084 1205394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:30:59.523620 1205394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:30:59.528024 1205394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:30:59.528101 1205394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:30:59.569691 1205394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
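
The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: each CA under /etc/ssl/certs is linked under its subject hash with a .0 suffix so OpenSSL-based clients can find it by hash. A minimal sketch of the same steps done by hand, assuming a CA already copied into /usr/share/ca-certificates as in this run:

    # Expose the CA under /etc/ssl/certs, then link it under its OpenSSL subject hash.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
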
	I1108 10:30:59.578528 1205394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:30:59.582080 1205394 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:30:59.582173 1205394 kubeadm.go:401] StartCluster: {Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:30:59.582258 1205394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:30:59.582316 1205394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:30:59.611929 1205394 cri.go:89] found id: ""
	I1108 10:30:59.612021 1205394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:30:59.619953 1205394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:30:59.628013 1205394 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:30:59.628103 1205394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:30:59.636115 1205394 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:30:59.636134 1205394 kubeadm.go:158] found existing configuration files:
	
	I1108 10:30:59.636184 1205394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:30:59.643904 1205394 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:30:59.643987 1205394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:30:59.651389 1205394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:30:59.659477 1205394 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:30:59.659542 1205394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:30:59.667118 1205394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:30:59.674823 1205394 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:30:59.674891 1205394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:30:59.682360 1205394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:30:59.690509 1205394 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:30:59.690628 1205394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
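
Each of the four checks above is the same pattern: grep an existing kubeconfig for the expected control-plane endpoint and delete the file if the endpoint is absent (or the file is missing), so kubeadm regenerates it. A condensed, illustrative equivalent of that loop:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Drop any kubeconfig that does not already point at the expected endpoint.
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
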
	I1108 10:30:59.705100 1205394 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:30:59.791930 1205394 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:30:59.882220 1205394 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 10:31:15.888424 1205394 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1108 10:31:15.888517 1205394 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:31:15.888610 1205394 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:31:15.888677 1205394 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:31:15.888726 1205394 kubeadm.go:319] OS: Linux
	I1108 10:31:15.888774 1205394 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:31:15.888825 1205394 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:31:15.888874 1205394 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:31:15.888924 1205394 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:31:15.888975 1205394 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:31:15.889026 1205394 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:31:15.889074 1205394 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:31:15.889131 1205394 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:31:15.889181 1205394 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:31:15.889256 1205394 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:31:15.889360 1205394 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:31:15.889456 1205394 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 10:31:15.889521 1205394 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 10:31:15.892585 1205394 out.go:252]   - Generating certificates and keys ...
	I1108 10:31:15.892710 1205394 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:31:15.892804 1205394 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:31:15.892886 1205394 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:31:15.892950 1205394 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:31:15.893018 1205394 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:31:15.893075 1205394 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:31:15.893136 1205394 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:31:15.893276 1205394 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-171136] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:31:15.893337 1205394 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:31:15.893475 1205394 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-171136] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 10:31:15.893548 1205394 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:31:15.893619 1205394 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:31:15.893669 1205394 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:31:15.893731 1205394 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:31:15.893788 1205394 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:31:15.893847 1205394 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:31:15.893928 1205394 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:31:15.893992 1205394 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:31:15.894086 1205394 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:31:15.894165 1205394 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:31:15.897291 1205394 out.go:252]   - Booting up control plane ...
	I1108 10:31:15.897413 1205394 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:31:15.897501 1205394 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:31:15.897577 1205394 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:31:15.897719 1205394 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:31:15.897822 1205394 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:31:15.897874 1205394 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:31:15.898062 1205394 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 10:31:15.898149 1205394 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.504168 seconds
	I1108 10:31:15.898267 1205394 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:31:15.898406 1205394 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:31:15.898471 1205394 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:31:15.898682 1205394 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-171136 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:31:15.898744 1205394 kubeadm.go:319] [bootstrap-token] Using token: fwu9q8.sspja4s45iwbr1b1
	I1108 10:31:15.901617 1205394 out.go:252]   - Configuring RBAC rules ...
	I1108 10:31:15.901764 1205394 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:31:15.901869 1205394 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:31:15.902049 1205394 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:31:15.902230 1205394 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:31:15.902373 1205394 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:31:15.902508 1205394 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:31:15.902670 1205394 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:31:15.902741 1205394 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:31:15.902810 1205394 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:31:15.902834 1205394 kubeadm.go:319] 
	I1108 10:31:15.902928 1205394 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:31:15.902942 1205394 kubeadm.go:319] 
	I1108 10:31:15.903036 1205394 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:31:15.903047 1205394 kubeadm.go:319] 
	I1108 10:31:15.903075 1205394 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:31:15.903152 1205394 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:31:15.903222 1205394 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:31:15.903232 1205394 kubeadm.go:319] 
	I1108 10:31:15.903293 1205394 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:31:15.903302 1205394 kubeadm.go:319] 
	I1108 10:31:15.903356 1205394 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:31:15.903365 1205394 kubeadm.go:319] 
	I1108 10:31:15.903424 1205394 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:31:15.903514 1205394 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:31:15.903596 1205394 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:31:15.903605 1205394 kubeadm.go:319] 
	I1108 10:31:15.903700 1205394 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:31:15.903811 1205394 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:31:15.903821 1205394 kubeadm.go:319] 
	I1108 10:31:15.903923 1205394 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fwu9q8.sspja4s45iwbr1b1 \
	I1108 10:31:15.904044 1205394 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 \
	I1108 10:31:15.904071 1205394 kubeadm.go:319] 	--control-plane 
	I1108 10:31:15.904080 1205394 kubeadm.go:319] 
	I1108 10:31:15.904177 1205394 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:31:15.904188 1205394 kubeadm.go:319] 
	I1108 10:31:15.904280 1205394 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fwu9q8.sspja4s45iwbr1b1 \
	I1108 10:31:15.904413 1205394 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 
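
The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the cluster CA's public key. It can be recomputed on the control plane with the standard OpenSSL pipeline from the kubeadm documentation; note that on this cluster the CA lives under /var/lib/minikube/certs rather than kubeadm's default /etc/kubernetes/pki:

    # Recompute the discovery token CA certificate hash (sketch; path per this run's certificateDir).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
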
	I1108 10:31:15.904425 1205394 cni.go:84] Creating CNI manager for ""
	I1108 10:31:15.904432 1205394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:31:15.907519 1205394 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 10:31:15.912151 1205394 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:31:15.919072 1205394 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1108 10:31:15.919099 1205394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:31:15.955658 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:31:16.952663 1205394 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:31:16.952813 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:16.952888 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-171136 minikube.k8s.io/updated_at=2025_11_08T10_31_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=old-k8s-version-171136 minikube.k8s.io/primary=true
	I1108 10:31:17.093089 1205394 ops.go:34] apiserver oom_adj: -16
	I1108 10:31:17.093187 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:17.593642 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:18.094079 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:18.593705 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:19.093300 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:19.593321 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:20.093862 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:20.593390 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:21.093318 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:21.593808 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:22.093378 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:22.593269 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:23.093276 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:23.593284 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:24.093287 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:24.593385 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:25.093636 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:25.593815 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:26.093298 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:26.593501 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:27.093792 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:27.593367 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:28.094251 1205394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:31:28.246741 1205394 kubeadm.go:1114] duration metric: took 11.293973312s to wait for elevateKubeSystemPrivileges
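
The block of identical "kubectl get sa default" invocations above is a poll: minikube retries roughly every half second until the default ServiceAccount exists, which signals that the controller manager is reconciling and the kube-system privilege elevation can proceed. An illustrative standalone equivalent with stock kubectl:

    # Wait until the default ServiceAccount has been created by the controller manager.
    until kubectl get serviceaccount default -n default >/dev/null 2>&1; do
      sleep 0.5
    done
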
	I1108 10:31:28.246773 1205394 kubeadm.go:403] duration metric: took 28.664602899s to StartCluster
	I1108 10:31:28.246790 1205394 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:31:28.246849 1205394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:31:28.247809 1205394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:31:28.248018 1205394 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:31:28.248100 1205394 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:31:28.248334 1205394 config.go:182] Loaded profile config "old-k8s-version-171136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:31:28.248376 1205394 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:31:28.248455 1205394 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-171136"
	I1108 10:31:28.248481 1205394 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-171136"
	I1108 10:31:28.248506 1205394 host.go:66] Checking if "old-k8s-version-171136" exists ...
	I1108 10:31:28.249156 1205394 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:31:28.249303 1205394 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-171136"
	I1108 10:31:28.249332 1205394 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-171136"
	I1108 10:31:28.249584 1205394 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:31:28.252592 1205394 out.go:179] * Verifying Kubernetes components...
	I1108 10:31:28.258422 1205394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:31:28.285113 1205394 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:31:28.288048 1205394 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:31:28.288070 1205394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:31:28.288136 1205394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:31:28.290565 1205394 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-171136"
	I1108 10:31:28.290603 1205394 host.go:66] Checking if "old-k8s-version-171136" exists ...
	I1108 10:31:28.291018 1205394 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:31:28.331838 1205394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34507 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:31:28.336615 1205394 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:31:28.336636 1205394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:31:28.336709 1205394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:31:28.363477 1205394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34507 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:31:28.653415 1205394 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:31:28.653582 1205394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:31:28.663315 1205394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:31:28.663765 1205394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:31:29.585397 1205394 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-171136" to be "Ready" ...
	I1108 10:31:29.585505 1205394 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
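
The sed pipeline run against the coredns ConfigMap a few lines earlier is what produces this host record: it inserts a hosts block ahead of the forward directive, enables query logging, and replaces the ConfigMap. Reconstructed from the sed expressions (not read back from the cluster), the injected Corefile fragment looks like:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }
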
	I1108 10:31:29.930811 1205394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.26699139s)
	I1108 10:31:29.930872 1205394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.267502972s)
	I1108 10:31:29.947508 1205394 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 10:31:29.950558 1205394 addons.go:515] duration metric: took 1.702092126s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 10:31:30.094831 1205394 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-171136" context rescaled to 1 replicas
	W1108 10:31:31.593495 1205394 node_ready.go:57] node "old-k8s-version-171136" has "Ready":"False" status (will retry)
	W1108 10:31:34.088421 1205394 node_ready.go:57] node "old-k8s-version-171136" has "Ready":"False" status (will retry)
	W1108 10:31:36.104192 1205394 node_ready.go:57] node "old-k8s-version-171136" has "Ready":"False" status (will retry)
	W1108 10:31:38.589543 1205394 node_ready.go:57] node "old-k8s-version-171136" has "Ready":"False" status (will retry)
	W1108 10:31:41.088544 1205394 node_ready.go:57] node "old-k8s-version-171136" has "Ready":"False" status (will retry)
	I1108 10:31:42.589291 1205394 node_ready.go:49] node "old-k8s-version-171136" is "Ready"
	I1108 10:31:42.589325 1205394 node_ready.go:38] duration metric: took 13.003894332s for node "old-k8s-version-171136" to be "Ready" ...
	I1108 10:31:42.589339 1205394 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:31:42.589401 1205394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:31:42.602471 1205394 api_server.go:72] duration metric: took 14.354417627s to wait for apiserver process to appear ...
	I1108 10:31:42.602494 1205394 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:31:42.602513 1205394 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:31:42.611127 1205394 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:31:42.613182 1205394 api_server.go:141] control plane version: v1.28.0
	I1108 10:31:42.613209 1205394 api_server.go:131] duration metric: took 10.708282ms to wait for apiserver health ...
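
The healthz wait above is a plain HTTPS GET against the apiserver, and the same probe can be run by hand from the host. A sketch (the -k flag is only because the cluster CA is not in the local trust store; the address matches this run):

    # Probe the apiserver health endpoint; a healthy server answers with "ok".
    curl -k https://192.168.85.2:8443/healthz
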
	I1108 10:31:42.613219 1205394 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:31:42.623973 1205394 system_pods.go:59] 8 kube-system pods found
	I1108 10:31:42.624007 1205394 system_pods.go:61] "coredns-5dd5756b68-5m4ph" [08005efc-5866-444b-a834-f1b18d38717c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:31:42.624014 1205394 system_pods.go:61] "etcd-old-k8s-version-171136" [0bf47fe6-f4be-4f1e-adb6-9e157b6b92da] Running
	I1108 10:31:42.624021 1205394 system_pods.go:61] "kindnet-bg4r4" [bc043139-6bce-4061-a3c6-e733d1e90763] Running
	I1108 10:31:42.624026 1205394 system_pods.go:61] "kube-apiserver-old-k8s-version-171136" [05958bfe-f331-4b7b-a251-b6888cb928af] Running
	I1108 10:31:42.624030 1205394 system_pods.go:61] "kube-controller-manager-old-k8s-version-171136" [6e7e2c08-dad2-46e2-a419-96803b5758c8] Running
	I1108 10:31:42.624034 1205394 system_pods.go:61] "kube-proxy-8ml4s" [40f4282d-0202-4179-953a-3fd511afbaa5] Running
	I1108 10:31:42.624041 1205394 system_pods.go:61] "kube-scheduler-old-k8s-version-171136" [dcbcba65-c6f8-45cd-a9fa-af29cd3b4ab6] Running
	I1108 10:31:42.624047 1205394 system_pods.go:61] "storage-provisioner" [66060f62-f048-459b-885f-8fa591cafed6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:31:42.624052 1205394 system_pods.go:74] duration metric: took 10.827531ms to wait for pod list to return data ...
	I1108 10:31:42.624060 1205394 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:31:42.631204 1205394 default_sa.go:45] found service account: "default"
	I1108 10:31:42.631226 1205394 default_sa.go:55] duration metric: took 7.160545ms for default service account to be created ...
	I1108 10:31:42.631236 1205394 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:31:42.637627 1205394 system_pods.go:86] 8 kube-system pods found
	I1108 10:31:42.637663 1205394 system_pods.go:89] "coredns-5dd5756b68-5m4ph" [08005efc-5866-444b-a834-f1b18d38717c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:31:42.637670 1205394 system_pods.go:89] "etcd-old-k8s-version-171136" [0bf47fe6-f4be-4f1e-adb6-9e157b6b92da] Running
	I1108 10:31:42.637677 1205394 system_pods.go:89] "kindnet-bg4r4" [bc043139-6bce-4061-a3c6-e733d1e90763] Running
	I1108 10:31:42.637682 1205394 system_pods.go:89] "kube-apiserver-old-k8s-version-171136" [05958bfe-f331-4b7b-a251-b6888cb928af] Running
	I1108 10:31:42.637687 1205394 system_pods.go:89] "kube-controller-manager-old-k8s-version-171136" [6e7e2c08-dad2-46e2-a419-96803b5758c8] Running
	I1108 10:31:42.637691 1205394 system_pods.go:89] "kube-proxy-8ml4s" [40f4282d-0202-4179-953a-3fd511afbaa5] Running
	I1108 10:31:42.637695 1205394 system_pods.go:89] "kube-scheduler-old-k8s-version-171136" [dcbcba65-c6f8-45cd-a9fa-af29cd3b4ab6] Running
	I1108 10:31:42.637701 1205394 system_pods.go:89] "storage-provisioner" [66060f62-f048-459b-885f-8fa591cafed6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:31:42.637731 1205394 retry.go:31] will retry after 188.382894ms: missing components: kube-dns
	I1108 10:31:42.847200 1205394 system_pods.go:86] 8 kube-system pods found
	I1108 10:31:42.847232 1205394 system_pods.go:89] "coredns-5dd5756b68-5m4ph" [08005efc-5866-444b-a834-f1b18d38717c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:31:42.847239 1205394 system_pods.go:89] "etcd-old-k8s-version-171136" [0bf47fe6-f4be-4f1e-adb6-9e157b6b92da] Running
	I1108 10:31:42.847247 1205394 system_pods.go:89] "kindnet-bg4r4" [bc043139-6bce-4061-a3c6-e733d1e90763] Running
	I1108 10:31:42.847252 1205394 system_pods.go:89] "kube-apiserver-old-k8s-version-171136" [05958bfe-f331-4b7b-a251-b6888cb928af] Running
	I1108 10:31:42.847257 1205394 system_pods.go:89] "kube-controller-manager-old-k8s-version-171136" [6e7e2c08-dad2-46e2-a419-96803b5758c8] Running
	I1108 10:31:42.847261 1205394 system_pods.go:89] "kube-proxy-8ml4s" [40f4282d-0202-4179-953a-3fd511afbaa5] Running
	I1108 10:31:42.847267 1205394 system_pods.go:89] "kube-scheduler-old-k8s-version-171136" [dcbcba65-c6f8-45cd-a9fa-af29cd3b4ab6] Running
	I1108 10:31:42.847273 1205394 system_pods.go:89] "storage-provisioner" [66060f62-f048-459b-885f-8fa591cafed6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:31:42.847287 1205394 retry.go:31] will retry after 269.784795ms: missing components: kube-dns
	I1108 10:31:43.121850 1205394 system_pods.go:86] 8 kube-system pods found
	I1108 10:31:43.121889 1205394 system_pods.go:89] "coredns-5dd5756b68-5m4ph" [08005efc-5866-444b-a834-f1b18d38717c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:31:43.121895 1205394 system_pods.go:89] "etcd-old-k8s-version-171136" [0bf47fe6-f4be-4f1e-adb6-9e157b6b92da] Running
	I1108 10:31:43.121902 1205394 system_pods.go:89] "kindnet-bg4r4" [bc043139-6bce-4061-a3c6-e733d1e90763] Running
	I1108 10:31:43.121907 1205394 system_pods.go:89] "kube-apiserver-old-k8s-version-171136" [05958bfe-f331-4b7b-a251-b6888cb928af] Running
	I1108 10:31:43.121913 1205394 system_pods.go:89] "kube-controller-manager-old-k8s-version-171136" [6e7e2c08-dad2-46e2-a419-96803b5758c8] Running
	I1108 10:31:43.121920 1205394 system_pods.go:89] "kube-proxy-8ml4s" [40f4282d-0202-4179-953a-3fd511afbaa5] Running
	I1108 10:31:43.121924 1205394 system_pods.go:89] "kube-scheduler-old-k8s-version-171136" [dcbcba65-c6f8-45cd-a9fa-af29cd3b4ab6] Running
	I1108 10:31:43.121930 1205394 system_pods.go:89] "storage-provisioner" [66060f62-f048-459b-885f-8fa591cafed6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:31:43.121944 1205394 retry.go:31] will retry after 308.749315ms: missing components: kube-dns
	I1108 10:31:43.435328 1205394 system_pods.go:86] 8 kube-system pods found
	I1108 10:31:43.435359 1205394 system_pods.go:89] "coredns-5dd5756b68-5m4ph" [08005efc-5866-444b-a834-f1b18d38717c] Running
	I1108 10:31:43.435367 1205394 system_pods.go:89] "etcd-old-k8s-version-171136" [0bf47fe6-f4be-4f1e-adb6-9e157b6b92da] Running
	I1108 10:31:43.435371 1205394 system_pods.go:89] "kindnet-bg4r4" [bc043139-6bce-4061-a3c6-e733d1e90763] Running
	I1108 10:31:43.435376 1205394 system_pods.go:89] "kube-apiserver-old-k8s-version-171136" [05958bfe-f331-4b7b-a251-b6888cb928af] Running
	I1108 10:31:43.435415 1205394 system_pods.go:89] "kube-controller-manager-old-k8s-version-171136" [6e7e2c08-dad2-46e2-a419-96803b5758c8] Running
	I1108 10:31:43.435431 1205394 system_pods.go:89] "kube-proxy-8ml4s" [40f4282d-0202-4179-953a-3fd511afbaa5] Running
	I1108 10:31:43.435436 1205394 system_pods.go:89] "kube-scheduler-old-k8s-version-171136" [dcbcba65-c6f8-45cd-a9fa-af29cd3b4ab6] Running
	I1108 10:31:43.435440 1205394 system_pods.go:89] "storage-provisioner" [66060f62-f048-459b-885f-8fa591cafed6] Running
	I1108 10:31:43.435449 1205394 system_pods.go:126] duration metric: took 804.205905ms to wait for k8s-apps to be running ...
	I1108 10:31:43.435460 1205394 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:31:43.435536 1205394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:31:43.448569 1205394 system_svc.go:56] duration metric: took 13.097647ms WaitForService to wait for kubelet
	I1108 10:31:43.448599 1205394 kubeadm.go:587] duration metric: took 15.200552073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:31:43.448624 1205394 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:31:43.451584 1205394 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:31:43.451618 1205394 node_conditions.go:123] node cpu capacity is 2
	I1108 10:31:43.451633 1205394 node_conditions.go:105] duration metric: took 3.002564ms to run NodePressure ...
	I1108 10:31:43.451667 1205394 start.go:242] waiting for startup goroutines ...
	I1108 10:31:43.451684 1205394 start.go:247] waiting for cluster config update ...
	I1108 10:31:43.451697 1205394 start.go:256] writing updated cluster config ...
	I1108 10:31:43.452030 1205394 ssh_runner.go:195] Run: rm -f paused
	I1108 10:31:43.455637 1205394 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:31:43.460157 1205394 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5m4ph" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:43.465645 1205394 pod_ready.go:94] pod "coredns-5dd5756b68-5m4ph" is "Ready"
	I1108 10:31:43.465677 1205394 pod_ready.go:86] duration metric: took 5.494022ms for pod "coredns-5dd5756b68-5m4ph" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:43.468933 1205394 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:43.473871 1205394 pod_ready.go:94] pod "etcd-old-k8s-version-171136" is "Ready"
	I1108 10:31:43.473900 1205394 pod_ready.go:86] duration metric: took 4.939143ms for pod "etcd-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:43.476891 1205394 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:43.481895 1205394 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-171136" is "Ready"
	I1108 10:31:43.481925 1205394 pod_ready.go:86] duration metric: took 5.007301ms for pod "kube-apiserver-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:43.485052 1205394 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:43.859792 1205394 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-171136" is "Ready"
	I1108 10:31:43.859864 1205394 pod_ready.go:86] duration metric: took 374.781304ms for pod "kube-controller-manager-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:44.061352 1205394 pod_ready.go:83] waiting for pod "kube-proxy-8ml4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:44.459643 1205394 pod_ready.go:94] pod "kube-proxy-8ml4s" is "Ready"
	I1108 10:31:44.459672 1205394 pod_ready.go:86] duration metric: took 398.279955ms for pod "kube-proxy-8ml4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:44.660497 1205394 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:45.062333 1205394 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-171136" is "Ready"
	I1108 10:31:45.062372 1205394 pod_ready.go:86] duration metric: took 401.844503ms for pod "kube-scheduler-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:31:45.062388 1205394 pod_ready.go:40] duration metric: took 1.60671586s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
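
This final wait is a label-based readiness check over the core control-plane pods. With stock kubectl the same condition can be expressed as, for example (illustrative; one label from the list above):

    # Block until the CoreDNS pods report Ready, mirroring the k8s-app=kube-dns entry in the wait list.
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
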
	I1108 10:31:45.147632 1205394 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1108 10:31:45.151016 1205394 out.go:203] 
	W1108 10:31:45.153985 1205394 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 10:31:45.158174 1205394 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 10:31:45.173772 1205394 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-171136" cluster and "default" namespace by default
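
Given the version-skew warning above (client 1.33.2 against a 1.28.0 control plane), the bundled, version-matched kubectl is the safer way to inspect this profile. An illustrative invocation, assuming minikube's global -p/--profile flag:

    # Run the version-matched kubectl against the old-k8s-version profile.
    minikube kubectl -p old-k8s-version-171136 -- get pods -A
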
	
	
	==> CRI-O <==
	Nov 08 10:31:42 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:42.91786252Z" level=info msg="Created container 8fc429d9a1c562685f158ec319bc803215c9c2083bae0e05a6f0bb3b778f54e2: kube-system/coredns-5dd5756b68-5m4ph/coredns" id=9669c5a8-6963-489d-8ad6-3cdf4677fc7a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:31:42 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:42.920709454Z" level=info msg="Starting container: 8fc429d9a1c562685f158ec319bc803215c9c2083bae0e05a6f0bb3b778f54e2" id=c8bf14c1-9413-43fb-a2ba-86f07c3f2c64 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:31:42 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:42.934538917Z" level=info msg="Started container" PID=1950 containerID=8fc429d9a1c562685f158ec319bc803215c9c2083bae0e05a6f0bb3b778f54e2 description=kube-system/coredns-5dd5756b68-5m4ph/coredns id=c8bf14c1-9413-43fb-a2ba-86f07c3f2c64 name=/runtime.v1.RuntimeService/StartContainer sandboxID=af137f89d0711627e333b101e9037c27a0f86cd84cb8d9c5b7e3f82b858cb57a
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.793482538Z" level=info msg="Running pod sandbox: default/busybox/POD" id=225c3f9a-bd0e-4c7b-946d-7b4887863eed name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.793549604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.798748678Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a13b60517511652278d12733ee958061cb18e66a712ebd616acfe1b63a10c810 UID:bb27a248-1db0-4b58-a6df-586ba5fd017f NetNS:/var/run/netns/201b59b3-ecaf-4d14-8360-51f166e3be56 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000e7610}] Aliases:map[]}"
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.7989128Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.808800496Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a13b60517511652278d12733ee958061cb18e66a712ebd616acfe1b63a10c810 UID:bb27a248-1db0-4b58-a6df-586ba5fd017f NetNS:/var/run/netns/201b59b3-ecaf-4d14-8360-51f166e3be56 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000e7610}] Aliases:map[]}"
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.809135083Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.813532351Z" level=info msg="Ran pod sandbox a13b60517511652278d12733ee958061cb18e66a712ebd616acfe1b63a10c810 with infra container: default/busybox/POD" id=225c3f9a-bd0e-4c7b-946d-7b4887863eed name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.814598837Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e50984b7-52ca-45b2-88b0-78d7cb0d7cbe name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.814723477Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e50984b7-52ca-45b2-88b0-78d7cb0d7cbe name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.814769055Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e50984b7-52ca-45b2-88b0-78d7cb0d7cbe name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.81748924Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9718952c-9d9a-4e63-9dd2-467d6f9583a5 name=/runtime.v1.ImageService/PullImage
	Nov 08 10:31:45 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:45.820513604Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 10:31:47 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:47.928707166Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9718952c-9d9a-4e63-9dd2-467d6f9583a5 name=/runtime.v1.ImageService/PullImage
	Nov 08 10:31:47 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:47.929612024Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3fc00cbd-d68d-4572-9c12-99852b5cc357 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:31:47 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:47.931289427Z" level=info msg="Creating container: default/busybox/busybox" id=b81fd672-b00b-4008-b26a-b4ad8e6d1d5e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:31:47 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:47.93146086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:31:47 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:47.937534195Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:31:47 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:47.938001995Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:31:47 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:47.953622098Z" level=info msg="Created container 61a9232d44964482811017de984c045302d1136561438034c638933c058fb5e8: default/busybox/busybox" id=b81fd672-b00b-4008-b26a-b4ad8e6d1d5e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:31:47 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:47.956115501Z" level=info msg="Starting container: 61a9232d44964482811017de984c045302d1136561438034c638933c058fb5e8" id=68f8ab3e-0aa3-42f3-a484-93fdec716863 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:31:47 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:47.958006748Z" level=info msg="Started container" PID=2003 containerID=61a9232d44964482811017de984c045302d1136561438034c638933c058fb5e8 description=default/busybox/busybox id=68f8ab3e-0aa3-42f3-a484-93fdec716863 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a13b60517511652278d12733ee958061cb18e66a712ebd616acfe1b63a10c810
	Nov 08 10:31:53 old-k8s-version-171136 crio[843]: time="2025-11-08T10:31:53.678239781Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
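
The CRI-O log above and the container listing below come from inside the node; they can be reproduced directly, assuming the standard journalctl and crictl tooling shipped in the kicbase image and the profile name from this run:

    # Tail the CRI-O service log and list all containers on the node (illustrative commands).
    minikube ssh -p old-k8s-version-171136 -- sudo journalctl -u crio --no-pager -n 50
    minikube ssh -p old-k8s-version-171136 -- sudo crictl ps -a
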
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	61a9232d44964       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   a13b605175116       busybox                                          default
	8fc429d9a1c56       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   af137f89d0711       coredns-5dd5756b68-5m4ph                         kube-system
	5371f9d047027       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   7fe924367a5a7       storage-provisioner                              kube-system
	d0c7031cdd17a       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   b694df94d6ec9       kindnet-bg4r4                                    kube-system
	9b0101385aed9       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   fdba95f902906       kube-proxy-8ml4s                                 kube-system
	c66c9eda4fad2       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      47 seconds ago      Running             kube-controller-manager   0                   fbfc52a816147       kube-controller-manager-old-k8s-version-171136   kube-system
	3784982eabbc8       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   cfe7770d6544e       etcd-old-k8s-version-171136                      kube-system
	21b4bc3abe968       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   4bb74cf8d959c       kube-apiserver-old-k8s-version-171136            kube-system
	01c41bf566ad9       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   db1a3ddf15a18       kube-scheduler-old-k8s-version-171136            kube-system
	
	
	==> coredns [8fc429d9a1c562685f158ec319bc803215c9c2083bae0e05a6f0bb3b778f54e2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37638 - 8746 "HINFO IN 2429745011034168362.1037684028729504595. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036697929s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-171136
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-171136
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=old-k8s-version-171136
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_31_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:31:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-171136
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:31:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:31:46 +0000   Sat, 08 Nov 2025 10:31:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:31:46 +0000   Sat, 08 Nov 2025 10:31:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:31:46 +0000   Sat, 08 Nov 2025 10:31:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:31:46 +0000   Sat, 08 Nov 2025 10:31:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-171136
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                abac0900-0998-47c3-b513-18b6d2fce4e7
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-5m4ph                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-171136                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-bg4r4                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-171136             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-171136    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-8ml4s                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-171136             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node old-k8s-version-171136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-171136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-171136 event: Registered Node old-k8s-version-171136 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-171136 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 8 10:09] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[ +18.424643] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3784982eabbc809606275d982e13e268b62e861b3e8b1c62f82bfa5efc35c805] <==
	{"level":"info","ts":"2025-11-08T10:31:08.221985Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:31:08.222037Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:31:08.224807Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T10:31:08.228503Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T10:31:08.228599Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T10:31:08.225144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-08T10:31:08.229008Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-08T10:31:08.480215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-08T10:31:08.480334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-08T10:31:08.480386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-08T10:31:08.480447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-08T10:31:08.48048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-08T10:31:08.480525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-08T10:31:08.48056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-08T10:31:08.481738Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:31:08.483149Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-171136 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T10:31:08.483362Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:31:08.485242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-08T10:31:08.486293Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:31:08.49093Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T10:31:08.491052Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-08T10:31:08.49116Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:31:08.491272Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:31:08.491338Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:31:08.505228Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 10:31:55 up  9:14,  0 user,  load average: 2.57, 3.30, 2.73
	Linux old-k8s-version-171136 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d0c7031cdd17a858ae96d26ce0ce9a6dfcee25395e9abf050231691b6d263d89] <==
	I1108 10:31:31.711050       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:31:31.711277       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:31:31.711399       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:31:31.711417       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:31:31.711427       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:31:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:31:31.911382       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:31:31.911450       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:31:31.911488       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:31:31.912003       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 10:31:32.212515       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:31:32.212615       1 metrics.go:72] Registering metrics
	I1108 10:31:32.212705       1 controller.go:711] "Syncing nftables rules"
	I1108 10:31:41.913006       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:31:41.913060       1 main.go:301] handling current node
	I1108 10:31:51.911640       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:31:51.911676       1 main.go:301] handling current node
	
	
	==> kube-apiserver [21b4bc3abe968d1afa7273f621fb443ebc57ce9bfe08f73a5361083812f38648] <==
	I1108 10:31:12.541139       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 10:31:12.542654       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:31:12.588362       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 10:31:12.591519       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 10:31:12.591541       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 10:31:12.591678       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 10:31:12.592297       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 10:31:12.605148       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 10:31:12.627255       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1108 10:31:12.671386       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:31:13.390343       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 10:31:13.396556       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 10:31:13.396637       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:31:14.039203       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:31:14.097950       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:31:14.242093       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 10:31:14.249521       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1108 10:31:14.250923       1 controller.go:624] quota admission added evaluator for: endpoints
	I1108 10:31:14.256182       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:31:14.568092       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 10:31:15.804230       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 10:31:15.821010       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 10:31:15.851542       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1108 10:31:27.876785       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1108 10:31:28.182155       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [c66c9eda4fad240c9495ea90d205fdba45e4993b351e5f10218ac8255ce4f1f0] <==
	I1108 10:31:27.526078       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 10:31:27.569480       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1108 10:31:27.589048       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 10:31:27.882578       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1108 10:31:27.967649       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:31:28.003175       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:31:28.003213       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 10:31:28.204591       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bg4r4"
	I1108 10:31:28.204617       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8ml4s"
	I1108 10:31:28.571561       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rvhcc"
	I1108 10:31:28.629367       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-5m4ph"
	I1108 10:31:28.657505       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="774.572091ms"
	I1108 10:31:28.691086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.532568ms"
	I1108 10:31:28.720997       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.847516ms"
	I1108 10:31:28.721098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.28µs"
	I1108 10:31:29.634745       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1108 10:31:29.665432       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-rvhcc"
	I1108 10:31:29.679650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.690361ms"
	I1108 10:31:29.693290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.575112ms"
	I1108 10:31:29.693379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.099µs"
	I1108 10:31:42.536384       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.09µs"
	I1108 10:31:42.559861       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.483µs"
	I1108 10:31:43.229209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.983338ms"
	I1108 10:31:43.229307       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.948µs"
	I1108 10:31:47.437825       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [9b0101385aed9d5d0723cc7c8fda35bc6deb390fc3359bd7875543fc0e0f0889] <==
	I1108 10:31:28.973849       1 server_others.go:69] "Using iptables proxy"
	I1108 10:31:28.988670       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1108 10:31:29.033565       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:31:29.036060       1 server_others.go:152] "Using iptables Proxier"
	I1108 10:31:29.036091       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 10:31:29.036099       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 10:31:29.036114       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 10:31:29.036322       1 server.go:846] "Version info" version="v1.28.0"
	I1108 10:31:29.036332       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:31:29.037248       1 config.go:188] "Starting service config controller"
	I1108 10:31:29.037271       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 10:31:29.037287       1 config.go:97] "Starting endpoint slice config controller"
	I1108 10:31:29.037291       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 10:31:29.037781       1 config.go:315] "Starting node config controller"
	I1108 10:31:29.037788       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 10:31:29.137321       1 shared_informer.go:318] Caches are synced for service config
	I1108 10:31:29.137421       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 10:31:29.137843       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [01c41bf566ad9600ca09ad3868596f897332bb4b5c574104a9588679c1e4aa50] <==
	W1108 10:31:12.636428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 10:31:12.636471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1108 10:31:12.636516       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 10:31:12.636542       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 10:31:12.636597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1108 10:31:12.636624       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1108 10:31:12.636809       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 10:31:12.636830       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 10:31:13.508360       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 10:31:13.508524       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 10:31:13.519417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 10:31:13.519467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 10:31:13.525025       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 10:31:13.525125       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 10:31:13.544610       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 10:31:13.544658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 10:31:13.577922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 10:31:13.578037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1108 10:31:13.675546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 10:31:13.676294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 10:31:13.747888       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1108 10:31:13.747922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1108 10:31:13.767380       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 10:31:13.767413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1108 10:31:15.517522       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 08 10:31:28 old-k8s-version-171136 kubelet[1385]: I1108 10:31:28.285552    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc043139-6bce-4061-a3c6-e733d1e90763-xtables-lock\") pod \"kindnet-bg4r4\" (UID: \"bc043139-6bce-4061-a3c6-e733d1e90763\") " pod="kube-system/kindnet-bg4r4"
	Nov 08 10:31:28 old-k8s-version-171136 kubelet[1385]: I1108 10:31:28.285639    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hkf4\" (UniqueName: \"kubernetes.io/projected/bc043139-6bce-4061-a3c6-e733d1e90763-kube-api-access-7hkf4\") pod \"kindnet-bg4r4\" (UID: \"bc043139-6bce-4061-a3c6-e733d1e90763\") " pod="kube-system/kindnet-bg4r4"
	Nov 08 10:31:28 old-k8s-version-171136 kubelet[1385]: I1108 10:31:28.285724    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/40f4282d-0202-4179-953a-3fd511afbaa5-kube-proxy\") pod \"kube-proxy-8ml4s\" (UID: \"40f4282d-0202-4179-953a-3fd511afbaa5\") " pod="kube-system/kube-proxy-8ml4s"
	Nov 08 10:31:28 old-k8s-version-171136 kubelet[1385]: I1108 10:31:28.285871    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjb8d\" (UniqueName: \"kubernetes.io/projected/40f4282d-0202-4179-953a-3fd511afbaa5-kube-api-access-bjb8d\") pod \"kube-proxy-8ml4s\" (UID: \"40f4282d-0202-4179-953a-3fd511afbaa5\") " pod="kube-system/kube-proxy-8ml4s"
	Nov 08 10:31:28 old-k8s-version-171136 kubelet[1385]: I1108 10:31:28.290424    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40f4282d-0202-4179-953a-3fd511afbaa5-xtables-lock\") pod \"kube-proxy-8ml4s\" (UID: \"40f4282d-0202-4179-953a-3fd511afbaa5\") " pod="kube-system/kube-proxy-8ml4s"
	Nov 08 10:31:28 old-k8s-version-171136 kubelet[1385]: I1108 10:31:28.290901    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc043139-6bce-4061-a3c6-e733d1e90763-lib-modules\") pod \"kindnet-bg4r4\" (UID: \"bc043139-6bce-4061-a3c6-e733d1e90763\") " pod="kube-system/kindnet-bg4r4"
	Nov 08 10:31:28 old-k8s-version-171136 kubelet[1385]: W1108 10:31:28.836348    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/crio-b694df94d6ec96879b7f9b76404e7fbc246c5aa2968aafa5c5ca564d8b6501c8 WatchSource:0}: Error finding container b694df94d6ec96879b7f9b76404e7fbc246c5aa2968aafa5c5ca564d8b6501c8: Status 404 returned error can't find the container with id b694df94d6ec96879b7f9b76404e7fbc246c5aa2968aafa5c5ca564d8b6501c8
	Nov 08 10:31:28 old-k8s-version-171136 kubelet[1385]: W1108 10:31:28.838115    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/crio-fdba95f902906a63d28c1c1d63a7b36c033b6eaa19042842df40ef941c8222ca WatchSource:0}: Error finding container fdba95f902906a63d28c1c1d63a7b36c033b6eaa19042842df40ef941c8222ca: Status 404 returned error can't find the container with id fdba95f902906a63d28c1c1d63a7b36c033b6eaa19042842df40ef941c8222ca
	Nov 08 10:31:32 old-k8s-version-171136 kubelet[1385]: I1108 10:31:32.166918    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8ml4s" podStartSLOduration=4.166873963 podCreationTimestamp="2025-11-08 10:31:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:31:29.164272533 +0000 UTC m=+13.401071513" watchObservedRunningTime="2025-11-08 10:31:32.166873963 +0000 UTC m=+16.403672943"
	Nov 08 10:31:36 old-k8s-version-171136 kubelet[1385]: I1108 10:31:36.058196    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-bg4r4" podStartSLOduration=5.336727193 podCreationTimestamp="2025-11-08 10:31:28 +0000 UTC" firstStartedPulling="2025-11-08 10:31:28.850072105 +0000 UTC m=+13.086871085" lastFinishedPulling="2025-11-08 10:31:31.57149816 +0000 UTC m=+15.808297139" observedRunningTime="2025-11-08 10:31:32.169004505 +0000 UTC m=+16.405803485" watchObservedRunningTime="2025-11-08 10:31:36.058153247 +0000 UTC m=+20.294952226"
	Nov 08 10:31:42 old-k8s-version-171136 kubelet[1385]: I1108 10:31:42.490304    1385 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 08 10:31:42 old-k8s-version-171136 kubelet[1385]: I1108 10:31:42.523916    1385 topology_manager.go:215] "Topology Admit Handler" podUID="66060f62-f048-459b-885f-8fa591cafed6" podNamespace="kube-system" podName="storage-provisioner"
	Nov 08 10:31:42 old-k8s-version-171136 kubelet[1385]: I1108 10:31:42.532301    1385 topology_manager.go:215] "Topology Admit Handler" podUID="08005efc-5866-444b-a834-f1b18d38717c" podNamespace="kube-system" podName="coredns-5dd5756b68-5m4ph"
	Nov 08 10:31:42 old-k8s-version-171136 kubelet[1385]: I1108 10:31:42.601349    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08005efc-5866-444b-a834-f1b18d38717c-config-volume\") pod \"coredns-5dd5756b68-5m4ph\" (UID: \"08005efc-5866-444b-a834-f1b18d38717c\") " pod="kube-system/coredns-5dd5756b68-5m4ph"
	Nov 08 10:31:42 old-k8s-version-171136 kubelet[1385]: I1108 10:31:42.601404    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzz2\" (UniqueName: \"kubernetes.io/projected/66060f62-f048-459b-885f-8fa591cafed6-kube-api-access-ndzz2\") pod \"storage-provisioner\" (UID: \"66060f62-f048-459b-885f-8fa591cafed6\") " pod="kube-system/storage-provisioner"
	Nov 08 10:31:42 old-k8s-version-171136 kubelet[1385]: I1108 10:31:42.601431    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slxcf\" (UniqueName: \"kubernetes.io/projected/08005efc-5866-444b-a834-f1b18d38717c-kube-api-access-slxcf\") pod \"coredns-5dd5756b68-5m4ph\" (UID: \"08005efc-5866-444b-a834-f1b18d38717c\") " pod="kube-system/coredns-5dd5756b68-5m4ph"
	Nov 08 10:31:42 old-k8s-version-171136 kubelet[1385]: I1108 10:31:42.601457    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/66060f62-f048-459b-885f-8fa591cafed6-tmp\") pod \"storage-provisioner\" (UID: \"66060f62-f048-459b-885f-8fa591cafed6\") " pod="kube-system/storage-provisioner"
	Nov 08 10:31:42 old-k8s-version-171136 kubelet[1385]: W1108 10:31:42.835884    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/crio-7fe924367a5a7dede84e38f7737edb698139f29fd1c8d41a7eb94d5fe6752408 WatchSource:0}: Error finding container 7fe924367a5a7dede84e38f7737edb698139f29fd1c8d41a7eb94d5fe6752408: Status 404 returned error can't find the container with id 7fe924367a5a7dede84e38f7737edb698139f29fd1c8d41a7eb94d5fe6752408
	Nov 08 10:31:42 old-k8s-version-171136 kubelet[1385]: W1108 10:31:42.874195    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/crio-af137f89d0711627e333b101e9037c27a0f86cd84cb8d9c5b7e3f82b858cb57a WatchSource:0}: Error finding container af137f89d0711627e333b101e9037c27a0f86cd84cb8d9c5b7e3f82b858cb57a: Status 404 returned error can't find the container with id af137f89d0711627e333b101e9037c27a0f86cd84cb8d9c5b7e3f82b858cb57a
	Nov 08 10:31:43 old-k8s-version-171136 kubelet[1385]: I1108 10:31:43.213586    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.213542923 podCreationTimestamp="2025-11-08 10:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:31:43.193894796 +0000 UTC m=+27.430693776" watchObservedRunningTime="2025-11-08 10:31:43.213542923 +0000 UTC m=+27.450341903"
	Nov 08 10:31:45 old-k8s-version-171136 kubelet[1385]: I1108 10:31:45.489504    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5m4ph" podStartSLOduration=17.489457916 podCreationTimestamp="2025-11-08 10:31:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:31:43.214495156 +0000 UTC m=+27.451294144" watchObservedRunningTime="2025-11-08 10:31:45.489457916 +0000 UTC m=+29.726257134"
	Nov 08 10:31:45 old-k8s-version-171136 kubelet[1385]: I1108 10:31:45.490424    1385 topology_manager.go:215] "Topology Admit Handler" podUID="bb27a248-1db0-4b58-a6df-586ba5fd017f" podNamespace="default" podName="busybox"
	Nov 08 10:31:45 old-k8s-version-171136 kubelet[1385]: I1108 10:31:45.529119    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq6pn\" (UniqueName: \"kubernetes.io/projected/bb27a248-1db0-4b58-a6df-586ba5fd017f-kube-api-access-wq6pn\") pod \"busybox\" (UID: \"bb27a248-1db0-4b58-a6df-586ba5fd017f\") " pod="default/busybox"
	Nov 08 10:31:45 old-k8s-version-171136 kubelet[1385]: W1108 10:31:45.812831    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/crio-a13b60517511652278d12733ee958061cb18e66a712ebd616acfe1b63a10c810 WatchSource:0}: Error finding container a13b60517511652278d12733ee958061cb18e66a712ebd616acfe1b63a10c810: Status 404 returned error can't find the container with id a13b60517511652278d12733ee958061cb18e66a712ebd616acfe1b63a10c810
	Nov 08 10:31:48 old-k8s-version-171136 kubelet[1385]: I1108 10:31:48.215718    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.101582885 podCreationTimestamp="2025-11-08 10:31:45 +0000 UTC" firstStartedPulling="2025-11-08 10:31:45.814917309 +0000 UTC m=+30.051716289" lastFinishedPulling="2025-11-08 10:31:47.929007087 +0000 UTC m=+32.165806075" observedRunningTime="2025-11-08 10:31:48.215583755 +0000 UTC m=+32.452382743" watchObservedRunningTime="2025-11-08 10:31:48.215672671 +0000 UTC m=+32.452471651"
	
	
	==> storage-provisioner [5371f9d047027af030ad8b2c91893308143ecb8bc5c3fbf017f58712755fbfe2] <==
	I1108 10:31:42.915599       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:31:42.933135       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:31:42.941175       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 10:31:42.966026       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:31:42.967961       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-171136_df157a0d-0d0d-49dc-888f-1cab6238ff0a!
	I1108 10:31:42.972650       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a450f0e-2def-442b-8030-194bd9a30378", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-171136_df157a0d-0d0d-49dc-888f-1cab6238ff0a became leader
	I1108 10:31:43.068895       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-171136_df157a0d-0d0d-49dc-888f-1cab6238ff0a!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-171136 -n old-k8s-version-171136
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-171136 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.46s)
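For reference, the two post-mortem checks that helpers_test.go runs above can be repeated by hand against the same profile. A minimal shell sketch, copied from the commands in the log above (the binary path assumes a local integration build of minikube; quoting of the jsonpath expression is added for shell safety):

	# Check the API server state for the profile used in this test
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-171136 -n old-k8s-version-171136

	# List pods in any namespace that are not in the Running phase
	kubectl --context old-k8s-version-171136 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running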

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-171136 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-171136 --alsologtostderr -v=1: exit status 80 (2.413755782s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-171136 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:33:10.804755 1211218 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:33:10.804985 1211218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:33:10.805018 1211218 out.go:374] Setting ErrFile to fd 2...
	I1108 10:33:10.805038 1211218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:33:10.805311 1211218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:33:10.805587 1211218 out.go:368] Setting JSON to false
	I1108 10:33:10.805647 1211218 mustload.go:66] Loading cluster: old-k8s-version-171136
	I1108 10:33:10.806071 1211218 config.go:182] Loaded profile config "old-k8s-version-171136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:33:10.806561 1211218 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:33:10.824203 1211218 host.go:66] Checking if "old-k8s-version-171136" exists ...
	I1108 10:33:10.824593 1211218 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:33:10.879637 1211218 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:33:10.869939994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:33:10.880388 1211218 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-171136 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:33:10.883879 1211218 out.go:179] * Pausing node old-k8s-version-171136 ... 
	I1108 10:33:10.886898 1211218 host.go:66] Checking if "old-k8s-version-171136" exists ...
	I1108 10:33:10.887245 1211218 ssh_runner.go:195] Run: systemctl --version
	I1108 10:33:10.887291 1211218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:33:10.903696 1211218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:33:11.013280 1211218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:33:11.034747 1211218 pause.go:52] kubelet running: true
	I1108 10:33:11.034818 1211218 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:33:11.292280 1211218 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:33:11.292395 1211218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:33:11.361712 1211218 cri.go:89] found id: "758b80ab804e552feb3f98e52fa667d161d85d3bab2614ec5c6efe8963ea3698"
	I1108 10:33:11.361762 1211218 cri.go:89] found id: "2b33161a0a491a5086c7c8ae7d045c0558f8c2fc886a2ba82e34c1b419eac34b"
	I1108 10:33:11.361768 1211218 cri.go:89] found id: "3c438edaaae97ac5fc21d3e9f7a5bfc1abf55d6f94c1d40caf872c0f88407309"
	I1108 10:33:11.361772 1211218 cri.go:89] found id: "a1ea9a35262a2ecf211dbe2bd4eb8aa0b383c6dde45b73c0eb91cf2e3d64d7d1"
	I1108 10:33:11.361776 1211218 cri.go:89] found id: "246af0d96cd99263d477cfcfde9cf5b96d4eb41bbf3703a2a45a5b4e53cc84de"
	I1108 10:33:11.361780 1211218 cri.go:89] found id: "db8c533fb06e8ef7402212f3c434623824a29c9cf817e134cf0d1695471f2609"
	I1108 10:33:11.361783 1211218 cri.go:89] found id: "6002b979fafdf69a44654d6dde5cc544aca07f7cc8a38cab91edafb52c08cd41"
	I1108 10:33:11.361786 1211218 cri.go:89] found id: "c0039ca9f9316f54572320f29d7cfdc22e2d6bf9c3d7f61d16d19d0dfce14965"
	I1108 10:33:11.361790 1211218 cri.go:89] found id: "86455d1631572d37d82679402ce9bf75876840bd25c547b5d518b6af7ce1c24d"
	I1108 10:33:11.361797 1211218 cri.go:89] found id: "b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e"
	I1108 10:33:11.361820 1211218 cri.go:89] found id: "25fa1ac9d4bca6c7f5c615c071f8779149b53aa42686a12af40926c011b98b71"
	I1108 10:33:11.361831 1211218 cri.go:89] found id: ""
	I1108 10:33:11.361911 1211218 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:33:11.381876 1211218 retry.go:31] will retry after 142.074582ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:33:11Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:33:11.524180 1211218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:33:11.537384 1211218 pause.go:52] kubelet running: false
	I1108 10:33:11.537504 1211218 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:33:11.706815 1211218 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:33:11.706907 1211218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:33:11.774435 1211218 cri.go:89] found id: "758b80ab804e552feb3f98e52fa667d161d85d3bab2614ec5c6efe8963ea3698"
	I1108 10:33:11.774509 1211218 cri.go:89] found id: "2b33161a0a491a5086c7c8ae7d045c0558f8c2fc886a2ba82e34c1b419eac34b"
	I1108 10:33:11.774528 1211218 cri.go:89] found id: "3c438edaaae97ac5fc21d3e9f7a5bfc1abf55d6f94c1d40caf872c0f88407309"
	I1108 10:33:11.774547 1211218 cri.go:89] found id: "a1ea9a35262a2ecf211dbe2bd4eb8aa0b383c6dde45b73c0eb91cf2e3d64d7d1"
	I1108 10:33:11.774578 1211218 cri.go:89] found id: "246af0d96cd99263d477cfcfde9cf5b96d4eb41bbf3703a2a45a5b4e53cc84de"
	I1108 10:33:11.774597 1211218 cri.go:89] found id: "db8c533fb06e8ef7402212f3c434623824a29c9cf817e134cf0d1695471f2609"
	I1108 10:33:11.774611 1211218 cri.go:89] found id: "6002b979fafdf69a44654d6dde5cc544aca07f7cc8a38cab91edafb52c08cd41"
	I1108 10:33:11.774625 1211218 cri.go:89] found id: "c0039ca9f9316f54572320f29d7cfdc22e2d6bf9c3d7f61d16d19d0dfce14965"
	I1108 10:33:11.774629 1211218 cri.go:89] found id: "86455d1631572d37d82679402ce9bf75876840bd25c547b5d518b6af7ce1c24d"
	I1108 10:33:11.774635 1211218 cri.go:89] found id: "b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e"
	I1108 10:33:11.774638 1211218 cri.go:89] found id: "25fa1ac9d4bca6c7f5c615c071f8779149b53aa42686a12af40926c011b98b71"
	I1108 10:33:11.774641 1211218 cri.go:89] found id: ""
	I1108 10:33:11.774688 1211218 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:33:11.787112 1211218 retry.go:31] will retry after 378.16119ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:33:11Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:33:12.165481 1211218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:33:12.184353 1211218 pause.go:52] kubelet running: false
	I1108 10:33:12.184496 1211218 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:33:12.345691 1211218 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:33:12.345814 1211218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:33:12.419917 1211218 cri.go:89] found id: "758b80ab804e552feb3f98e52fa667d161d85d3bab2614ec5c6efe8963ea3698"
	I1108 10:33:12.419943 1211218 cri.go:89] found id: "2b33161a0a491a5086c7c8ae7d045c0558f8c2fc886a2ba82e34c1b419eac34b"
	I1108 10:33:12.419949 1211218 cri.go:89] found id: "3c438edaaae97ac5fc21d3e9f7a5bfc1abf55d6f94c1d40caf872c0f88407309"
	I1108 10:33:12.419953 1211218 cri.go:89] found id: "a1ea9a35262a2ecf211dbe2bd4eb8aa0b383c6dde45b73c0eb91cf2e3d64d7d1"
	I1108 10:33:12.419957 1211218 cri.go:89] found id: "246af0d96cd99263d477cfcfde9cf5b96d4eb41bbf3703a2a45a5b4e53cc84de"
	I1108 10:33:12.419961 1211218 cri.go:89] found id: "db8c533fb06e8ef7402212f3c434623824a29c9cf817e134cf0d1695471f2609"
	I1108 10:33:12.419964 1211218 cri.go:89] found id: "6002b979fafdf69a44654d6dde5cc544aca07f7cc8a38cab91edafb52c08cd41"
	I1108 10:33:12.419967 1211218 cri.go:89] found id: "c0039ca9f9316f54572320f29d7cfdc22e2d6bf9c3d7f61d16d19d0dfce14965"
	I1108 10:33:12.420002 1211218 cri.go:89] found id: "86455d1631572d37d82679402ce9bf75876840bd25c547b5d518b6af7ce1c24d"
	I1108 10:33:12.420017 1211218 cri.go:89] found id: "b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e"
	I1108 10:33:12.420026 1211218 cri.go:89] found id: "25fa1ac9d4bca6c7f5c615c071f8779149b53aa42686a12af40926c011b98b71"
	I1108 10:33:12.420029 1211218 cri.go:89] found id: ""
	I1108 10:33:12.420099 1211218 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:33:12.431716 1211218 retry.go:31] will retry after 442.393264ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:33:12Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:33:12.875052 1211218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:33:12.888297 1211218 pause.go:52] kubelet running: false
	I1108 10:33:12.888418 1211218 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:33:13.054233 1211218 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:33:13.054370 1211218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:33:13.130538 1211218 cri.go:89] found id: "758b80ab804e552feb3f98e52fa667d161d85d3bab2614ec5c6efe8963ea3698"
	I1108 10:33:13.130610 1211218 cri.go:89] found id: "2b33161a0a491a5086c7c8ae7d045c0558f8c2fc886a2ba82e34c1b419eac34b"
	I1108 10:33:13.130631 1211218 cri.go:89] found id: "3c438edaaae97ac5fc21d3e9f7a5bfc1abf55d6f94c1d40caf872c0f88407309"
	I1108 10:33:13.130649 1211218 cri.go:89] found id: "a1ea9a35262a2ecf211dbe2bd4eb8aa0b383c6dde45b73c0eb91cf2e3d64d7d1"
	I1108 10:33:13.130684 1211218 cri.go:89] found id: "246af0d96cd99263d477cfcfde9cf5b96d4eb41bbf3703a2a45a5b4e53cc84de"
	I1108 10:33:13.130698 1211218 cri.go:89] found id: "db8c533fb06e8ef7402212f3c434623824a29c9cf817e134cf0d1695471f2609"
	I1108 10:33:13.130702 1211218 cri.go:89] found id: "6002b979fafdf69a44654d6dde5cc544aca07f7cc8a38cab91edafb52c08cd41"
	I1108 10:33:13.130705 1211218 cri.go:89] found id: "c0039ca9f9316f54572320f29d7cfdc22e2d6bf9c3d7f61d16d19d0dfce14965"
	I1108 10:33:13.130709 1211218 cri.go:89] found id: "86455d1631572d37d82679402ce9bf75876840bd25c547b5d518b6af7ce1c24d"
	I1108 10:33:13.130737 1211218 cri.go:89] found id: "b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e"
	I1108 10:33:13.130743 1211218 cri.go:89] found id: "25fa1ac9d4bca6c7f5c615c071f8779149b53aa42686a12af40926c011b98b71"
	I1108 10:33:13.130746 1211218 cri.go:89] found id: ""
	I1108 10:33:13.130807 1211218 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:33:13.146707 1211218 out.go:203] 
	W1108 10:33:13.149626 1211218 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:33:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:33:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:33:13.149657 1211218 out.go:285] * 
	* 
	W1108 10:33:13.158557 1211218 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:33:13.161723 1211218 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-171136 --alsologtostderr -v=1 failed: exit status 80
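The stderr above shows the pause preflight: crictl finds the kube-system containers, but every `sudo runc list -f json` attempt fails with "open /run/runc: no such file or directory", and after the retries in retry.go run out the command exits with GUEST_PAUSE. The following is a minimal, hypothetical Go sketch (not minikube's actual implementation) that reproduces that sequence from inside the node, assuming it is run via `minikube ssh` on a crio profile like this one:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Same crictl invocation the cri.go lines above log for kube-system.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
		fmt.Printf("crictl: err=%v\n%s", err, out)

		// runc keeps its container state under /run/runc; when that directory
		// is missing, as in this failure, `runc list` exits non-zero every try.
		delay := 300 * time.Millisecond
		for attempt := 1; attempt <= 3; attempt++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("runc list: %s\n", out)
				return
			}
			fmt.Printf("attempt %d: %v: %s", attempt, err, out)
			time.Sleep(delay)
			delay *= 2
		}
		fmt.Println("giving up, as the pause path does before reporting GUEST_PAUSE")
	}

The backoff values (378ms, 442ms) logged above come from minikube's own retry helper; the fixed delays here are illustrative only.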
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-171136
helpers_test.go:243: (dbg) docker inspect old-k8s-version-171136:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d",
	        "Created": "2025-11-08T10:30:49.022889439Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1209125,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:32:08.741273141Z",
	            "FinishedAt": "2025-11-08T10:32:07.904545424Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/hosts",
	        "LogPath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d-json.log",
	        "Name": "/old-k8s-version-171136",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-171136:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-171136",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d",
	                "LowerDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-171136",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-171136/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-171136",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-171136",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-171136",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "183393353d13bc2f7402a4414fec9ceba21ee1c49c86570517763443eaeb522b",
	            "SandboxKey": "/var/run/docker/netns/183393353d13",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34512"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34513"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34516"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34514"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34515"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-171136": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:50:9a:35:75:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de4af9df12e1c8f538a1e008be00be15053361dbab11b5398b5ceb5166430671",
	                    "EndpointID": "49333c1233b638479849e812fc65bff81ae85c02fe07fc4e3060509059e4fcd5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-171136",
	                        "b7cf45de166d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
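The inspect output above is where the published host ports come from; later in the log the cli_runner extracts the SSH port with a `--format` Go template. A small sketch of that extraction, using the same template string and the container name from this report, assuming a local docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the cli_runner lines below use to find the SSH port.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", format, "old-k8s-version-171136").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 34512 in the inspect above
	}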
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-171136 -n old-k8s-version-171136
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-171136 -n old-k8s-version-171136: exit status 2 (393.997789ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
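The harness tolerates the non-zero exit here because `minikube status` still prints the host state ("Running") even when other components are stopped or paused. A hedged sketch of that "may be ok" handling, with the binary path and profile name copied from this report:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-171136")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("status:", strings.TrimSpace(string(out)))
		case errors.As(err, &exitErr):
			// A non-zero exit still carries usable stdout; record it instead of failing.
			fmt.Printf("status: %s (exit %d, may be ok)\n",
				strings.TrimSpace(string(out)), exitErr.ExitCode())
		default:
			fmt.Println("could not run minikube:", err)
		}
	}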
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-171136 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-171136 logs -n 25: (1.349593677s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-731120 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo containerd config dump                                                                                                                                                                                                  │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo crio config                                                                                                                                                                                                             │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ delete  │ -p cilium-731120                                                                                                                                                                                                                              │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p force-systemd-env-680693 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-680693  │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ delete  │ -p kubernetes-upgrade-666491                                                                                                                                                                                                                  │ kubernetes-upgrade-666491 │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-837698    │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p force-systemd-env-680693                                                                                                                                                                                                                   │ force-systemd-env-680693  │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p cert-options-517657 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:30 UTC │
	│ ssh     │ cert-options-517657 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ ssh     │ -p cert-options-517657 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-517657                                                                                                                                                                                                                        │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-171136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-171136 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │ 08 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-171136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ image   │ old-k8s-version-171136 image list --format=json                                                                                                                                                                                               │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-171136 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:32:08
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:32:08.444611 1208998 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:32:08.444756 1208998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:32:08.444767 1208998 out.go:374] Setting ErrFile to fd 2...
	I1108 10:32:08.444779 1208998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:32:08.445163 1208998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:32:08.445618 1208998 out.go:368] Setting JSON to false
	I1108 10:32:08.446672 1208998 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33274,"bootTime":1762564655,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:32:08.446779 1208998 start.go:143] virtualization:  
	I1108 10:32:08.449881 1208998 out.go:179] * [old-k8s-version-171136] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:32:08.453722 1208998 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:32:08.453807 1208998 notify.go:221] Checking for updates...
	I1108 10:32:08.459719 1208998 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:32:08.462687 1208998 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:32:08.465668 1208998 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:32:08.468575 1208998 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:32:08.471519 1208998 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:32:08.474949 1208998 config.go:182] Loaded profile config "old-k8s-version-171136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:32:08.478485 1208998 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1108 10:32:08.481468 1208998 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:32:08.517131 1208998 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:32:08.517311 1208998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:32:08.572147 1208998 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:32:08.562098072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:32:08.572258 1208998 docker.go:319] overlay module found
	I1108 10:32:08.575315 1208998 out.go:179] * Using the docker driver based on existing profile
	I1108 10:32:08.578148 1208998 start.go:309] selected driver: docker
	I1108 10:32:08.578173 1208998 start.go:930] validating driver "docker" against &{Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:32:08.578273 1208998 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:32:08.578949 1208998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:32:08.646272 1208998 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:32:08.637213806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:32:08.646664 1208998 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:32:08.646694 1208998 cni.go:84] Creating CNI manager for ""
	I1108 10:32:08.646751 1208998 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:32:08.646786 1208998 start.go:353] cluster config:
	{Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:32:08.651849 1208998 out.go:179] * Starting "old-k8s-version-171136" primary control-plane node in "old-k8s-version-171136" cluster
	I1108 10:32:08.654639 1208998 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:32:08.657569 1208998 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:32:08.660403 1208998 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:32:08.660498 1208998 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1108 10:32:08.660512 1208998 cache.go:59] Caching tarball of preloaded images
	I1108 10:32:08.660522 1208998 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:32:08.660592 1208998 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:32:08.660604 1208998 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1108 10:32:08.660714 1208998 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/config.json ...
	I1108 10:32:08.679145 1208998 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:32:08.679165 1208998 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:32:08.679178 1208998 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:32:08.679200 1208998 start.go:360] acquireMachinesLock for old-k8s-version-171136: {Name:mk3d8c83478e2975fc25a9dafdc0d687aa9eb7c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:32:08.679254 1208998 start.go:364] duration metric: took 35.904µs to acquireMachinesLock for "old-k8s-version-171136"
	I1108 10:32:08.679273 1208998 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:32:08.679279 1208998 fix.go:54] fixHost starting: 
	I1108 10:32:08.679561 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:08.705454 1208998 fix.go:112] recreateIfNeeded on old-k8s-version-171136: state=Stopped err=<nil>
	W1108 10:32:08.705494 1208998 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 10:32:08.708557 1208998 out.go:252] * Restarting existing docker container for "old-k8s-version-171136" ...
	I1108 10:32:08.708645 1208998 cli_runner.go:164] Run: docker start old-k8s-version-171136
	I1108 10:32:08.967510 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:08.996909 1208998 kic.go:430] container "old-k8s-version-171136" state is running.
	I1108 10:32:08.997278 1208998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-171136
	I1108 10:32:09.018092 1208998 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/config.json ...
	I1108 10:32:09.018335 1208998 machine.go:94] provisionDockerMachine start ...
	I1108 10:32:09.018404 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:09.040777 1208998 main.go:143] libmachine: Using SSH client type: native
	I1108 10:32:09.041180 1208998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1108 10:32:09.041199 1208998 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:32:09.042011 1208998 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:32:12.196029 1208998 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171136
	
	I1108 10:32:12.196053 1208998 ubuntu.go:182] provisioning hostname "old-k8s-version-171136"
	I1108 10:32:12.196125 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:12.213647 1208998 main.go:143] libmachine: Using SSH client type: native
	I1108 10:32:12.214009 1208998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1108 10:32:12.214028 1208998 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171136 && echo "old-k8s-version-171136" | sudo tee /etc/hostname
	I1108 10:32:12.389685 1208998 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171136
	
	I1108 10:32:12.389774 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:12.407588 1208998 main.go:143] libmachine: Using SSH client type: native
	I1108 10:32:12.407922 1208998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1108 10:32:12.407940 1208998 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171136/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:32:12.560891 1208998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:32:12.560958 1208998 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:32:12.560995 1208998 ubuntu.go:190] setting up certificates
	I1108 10:32:12.561036 1208998 provision.go:84] configureAuth start
	I1108 10:32:12.561119 1208998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-171136
	I1108 10:32:12.578280 1208998 provision.go:143] copyHostCerts
	I1108 10:32:12.578347 1208998 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:32:12.578364 1208998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:32:12.578441 1208998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:32:12.578544 1208998 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:32:12.578549 1208998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:32:12.578573 1208998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:32:12.578636 1208998 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:32:12.578641 1208998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:32:12.578666 1208998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:32:12.578721 1208998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171136 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-171136]
	I1108 10:32:13.013655 1208998 provision.go:177] copyRemoteCerts
	I1108 10:32:13.013747 1208998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:32:13.013823 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.031060 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:13.136199 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:32:13.153520 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1108 10:32:13.178828 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:32:13.196782 1208998 provision.go:87] duration metric: took 635.692609ms to configureAuth
	I1108 10:32:13.196807 1208998 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:32:13.197047 1208998 config.go:182] Loaded profile config "old-k8s-version-171136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:32:13.197148 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.213905 1208998 main.go:143] libmachine: Using SSH client type: native
	I1108 10:32:13.214213 1208998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1108 10:32:13.214235 1208998 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:32:13.533570 1208998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:32:13.533598 1208998 machine.go:97] duration metric: took 4.515251516s to provisionDockerMachine
	I1108 10:32:13.533609 1208998 start.go:293] postStartSetup for "old-k8s-version-171136" (driver="docker")
	I1108 10:32:13.533657 1208998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:32:13.533760 1208998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:32:13.533835 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.553388 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:13.664224 1208998 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:32:13.667452 1208998 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:32:13.667491 1208998 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:32:13.667503 1208998 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:32:13.667558 1208998 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:32:13.667645 1208998 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:32:13.667755 1208998 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:32:13.675217 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:32:13.692545 1208998 start.go:296] duration metric: took 158.883747ms for postStartSetup
	I1108 10:32:13.692686 1208998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:32:13.692752 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.712046 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:13.813579 1208998 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:32:13.818438 1208998 fix.go:56] duration metric: took 5.139151846s for fixHost
	I1108 10:32:13.818464 1208998 start.go:83] releasing machines lock for "old-k8s-version-171136", held for 5.139201741s
	I1108 10:32:13.818541 1208998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-171136
	I1108 10:32:13.835617 1208998 ssh_runner.go:195] Run: cat /version.json
	I1108 10:32:13.835677 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.835956 1208998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:32:13.836036 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.857822 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:13.870080 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:13.960105 1208998 ssh_runner.go:195] Run: systemctl --version
	I1108 10:32:14.054386 1208998 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:32:14.092829 1208998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:32:14.097796 1208998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:32:14.097875 1208998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:32:14.106901 1208998 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:32:14.106930 1208998 start.go:496] detecting cgroup driver to use...
	I1108 10:32:14.106971 1208998 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:32:14.107022 1208998 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:32:14.123025 1208998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:32:14.136379 1208998 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:32:14.136507 1208998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:32:14.151953 1208998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:32:14.166342 1208998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:32:14.289770 1208998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:32:14.419099 1208998 docker.go:234] disabling docker service ...
	I1108 10:32:14.419221 1208998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:32:14.435833 1208998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:32:14.449180 1208998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:32:14.574518 1208998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:32:14.697250 1208998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:32:14.710717 1208998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:32:14.727261 1208998 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 10:32:14.727324 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.738350 1208998 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:32:14.738421 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.748617 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.758165 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.767131 1208998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:32:14.775665 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.785154 1208998 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.793835 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.805037 1208998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:32:14.812353 1208998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:32:14.819842 1208998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:32:14.941742 1208998 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:32:15.099236 1208998 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:32:15.099317 1208998 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:32:15.103539 1208998 start.go:564] Will wait 60s for crictl version
	I1108 10:32:15.103608 1208998 ssh_runner.go:195] Run: which crictl
	I1108 10:32:15.107665 1208998 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:32:15.134134 1208998 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:32:15.134223 1208998 ssh_runner.go:195] Run: crio --version
	I1108 10:32:15.164633 1208998 ssh_runner.go:195] Run: crio --version
	I1108 10:32:15.203652 1208998 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1108 10:32:15.206547 1208998 cli_runner.go:164] Run: docker network inspect old-k8s-version-171136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:32:15.222865 1208998 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:32:15.226922 1208998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:32:15.237112 1208998 kubeadm.go:884] updating cluster {Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:32:15.237225 1208998 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:32:15.237279 1208998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:32:15.276060 1208998 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:32:15.276081 1208998 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:32:15.276138 1208998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:32:15.303971 1208998 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:32:15.303996 1208998 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:32:15.304005 1208998 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1108 10:32:15.304106 1208998 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-171136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:32:15.304187 1208998 ssh_runner.go:195] Run: crio config
	I1108 10:32:15.363353 1208998 cni.go:84] Creating CNI manager for ""
	I1108 10:32:15.363378 1208998 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:32:15.363396 1208998 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:32:15.363420 1208998 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-171136 NodeName:old-k8s-version-171136 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:32:15.363567 1208998 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-171136"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:32:15.363642 1208998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1108 10:32:15.371404 1208998 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:32:15.371493 1208998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:32:15.379036 1208998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1108 10:32:15.391671 1208998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:32:15.404700 1208998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1108 10:32:15.418870 1208998 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:32:15.423099 1208998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:32:15.433431 1208998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:32:15.563019 1208998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:32:15.580661 1208998 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136 for IP: 192.168.85.2
	I1108 10:32:15.580726 1208998 certs.go:195] generating shared ca certs ...
	I1108 10:32:15.580760 1208998 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:32:15.580950 1208998 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:32:15.581024 1208998 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:32:15.581060 1208998 certs.go:257] generating profile certs ...
	I1108 10:32:15.581201 1208998 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.key
	I1108 10:32:15.581325 1208998 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.key.3f7b60cf
	I1108 10:32:15.581389 1208998 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.key
	I1108 10:32:15.581542 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:32:15.581610 1208998 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:32:15.581648 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:32:15.581702 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:32:15.581760 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:32:15.581806 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:32:15.581890 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:32:15.582578 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:32:15.604559 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:32:15.625410 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:32:15.646653 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:32:15.667793 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1108 10:32:15.693485 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:32:15.714394 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:32:15.737502 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:32:15.770371 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:32:15.797233 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:32:15.822749 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:32:15.843773 1208998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:32:15.858999 1208998 ssh_runner.go:195] Run: openssl version
	I1108 10:32:15.867116 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:32:15.877530 1208998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:32:15.881547 1208998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:32:15.881626 1208998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:32:15.930110 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:32:15.938867 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:32:15.947566 1208998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:32:15.951364 1208998 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:32:15.951472 1208998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:32:15.997419 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:32:16.011872 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:32:16.020901 1208998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:32:16.032833 1208998 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:32:16.032921 1208998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:32:16.074871 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:32:16.082996 1208998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:32:16.086888 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:32:16.128302 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:32:16.169901 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:32:16.236567 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:32:16.297561 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:32:16.381127 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 10:32:16.474342 1208998 kubeadm.go:401] StartCluster: {Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:32:16.474428 1208998 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:32:16.474492 1208998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:32:16.520504 1208998 cri.go:89] found id: "db8c533fb06e8ef7402212f3c434623824a29c9cf817e134cf0d1695471f2609"
	I1108 10:32:16.520528 1208998 cri.go:89] found id: "6002b979fafdf69a44654d6dde5cc544aca07f7cc8a38cab91edafb52c08cd41"
	I1108 10:32:16.520534 1208998 cri.go:89] found id: "c0039ca9f9316f54572320f29d7cfdc22e2d6bf9c3d7f61d16d19d0dfce14965"
	I1108 10:32:16.520543 1208998 cri.go:89] found id: "86455d1631572d37d82679402ce9bf75876840bd25c547b5d518b6af7ce1c24d"
	I1108 10:32:16.520547 1208998 cri.go:89] found id: ""
	I1108 10:32:16.520596 1208998 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:32:16.538075 1208998 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:32:16Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:32:16.538144 1208998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:32:16.548302 1208998 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:32:16.548324 1208998 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:32:16.548374 1208998 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:32:16.559534 1208998 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:32:16.560085 1208998 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-171136" does not appear in /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:32:16.560357 1208998 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-1027379/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-171136" cluster setting kubeconfig missing "old-k8s-version-171136" context setting]
	I1108 10:32:16.560876 1208998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:32:16.562427 1208998 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:32:16.570292 1208998 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:32:16.570370 1208998 kubeadm.go:602] duration metric: took 22.038941ms to restartPrimaryControlPlane
	I1108 10:32:16.570395 1208998 kubeadm.go:403] duration metric: took 96.063127ms to StartCluster
	I1108 10:32:16.570444 1208998 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:32:16.570539 1208998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:32:16.571566 1208998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:32:16.571866 1208998 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:32:16.572307 1208998 config.go:182] Loaded profile config "old-k8s-version-171136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:32:16.572381 1208998 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:32:16.572578 1208998 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-171136"
	I1108 10:32:16.572592 1208998 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-171136"
	W1108 10:32:16.572598 1208998 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:32:16.572624 1208998 host.go:66] Checking if "old-k8s-version-171136" exists ...
	I1108 10:32:16.572645 1208998 addons.go:70] Setting dashboard=true in profile "old-k8s-version-171136"
	I1108 10:32:16.572660 1208998 addons.go:239] Setting addon dashboard=true in "old-k8s-version-171136"
	W1108 10:32:16.572666 1208998 addons.go:248] addon dashboard should already be in state true
	I1108 10:32:16.572686 1208998 host.go:66] Checking if "old-k8s-version-171136" exists ...
	I1108 10:32:16.573088 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:16.573308 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:16.573676 1208998 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-171136"
	I1108 10:32:16.573693 1208998 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-171136"
	I1108 10:32:16.573970 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:16.584746 1208998 out.go:179] * Verifying Kubernetes components...
	I1108 10:32:16.591402 1208998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:32:16.632315 1208998 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-171136"
	W1108 10:32:16.632348 1208998 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:32:16.632374 1208998 host.go:66] Checking if "old-k8s-version-171136" exists ...
	I1108 10:32:16.632854 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:16.640431 1208998 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:32:16.643666 1208998 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:32:16.643734 1208998 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:32:16.643749 1208998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:32:16.643816 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:16.656227 1208998 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:32:16.659570 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:32:16.659597 1208998 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:32:16.659669 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:16.687620 1208998 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:32:16.687643 1208998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:32:16.687706 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:16.720839 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:16.728571 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:16.745019 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:16.900282 1208998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:32:16.940338 1208998 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-171136" to be "Ready" ...
	I1108 10:32:16.974246 1208998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:32:16.976996 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:32:16.977018 1208998 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:32:17.037892 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:32:17.037961 1208998 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:32:17.086066 1208998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:32:17.107120 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:32:17.107193 1208998 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:32:17.184956 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:32:17.185020 1208998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:32:17.238242 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:32:17.238317 1208998 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:32:17.317098 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:32:17.317172 1208998 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:32:17.339544 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:32:17.339617 1208998 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:32:17.358394 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:32:17.358473 1208998 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:32:17.376545 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:32:17.376607 1208998 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:32:17.389589 1208998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:32:21.218701 1208998 node_ready.go:49] node "old-k8s-version-171136" is "Ready"
	I1108 10:32:21.218779 1208998 node_ready.go:38] duration metric: took 4.278358595s for node "old-k8s-version-171136" to be "Ready" ...
	I1108 10:32:21.218828 1208998 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:32:21.218936 1208998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:32:22.884078 1208998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.909798403s)
	I1108 10:32:22.884140 1208998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.798004256s)
	I1108 10:32:23.410432 1208998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.020753033s)
	I1108 10:32:23.410655 1208998 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.191688533s)
	I1108 10:32:23.410676 1208998 api_server.go:72] duration metric: took 6.838754901s to wait for apiserver process to appear ...
	I1108 10:32:23.410696 1208998 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:32:23.410726 1208998 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:32:23.413738 1208998 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-171136 addons enable metrics-server
	
	I1108 10:32:23.416744 1208998 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1108 10:32:23.419735 1208998 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:32:23.419972 1208998 addons.go:515] duration metric: took 6.847587602s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1108 10:32:23.421308 1208998 api_server.go:141] control plane version: v1.28.0
	I1108 10:32:23.421333 1208998 api_server.go:131] duration metric: took 10.622437ms to wait for apiserver health ...
	I1108 10:32:23.421346 1208998 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:32:23.425935 1208998 system_pods.go:59] 8 kube-system pods found
	I1108 10:32:23.425981 1208998 system_pods.go:61] "coredns-5dd5756b68-5m4ph" [08005efc-5866-444b-a834-f1b18d38717c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:32:23.425991 1208998 system_pods.go:61] "etcd-old-k8s-version-171136" [0bf47fe6-f4be-4f1e-adb6-9e157b6b92da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:32:23.425997 1208998 system_pods.go:61] "kindnet-bg4r4" [bc043139-6bce-4061-a3c6-e733d1e90763] Running
	I1108 10:32:23.426005 1208998 system_pods.go:61] "kube-apiserver-old-k8s-version-171136" [05958bfe-f331-4b7b-a251-b6888cb928af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:32:23.426017 1208998 system_pods.go:61] "kube-controller-manager-old-k8s-version-171136" [6e7e2c08-dad2-46e2-a419-96803b5758c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:32:23.426025 1208998 system_pods.go:61] "kube-proxy-8ml4s" [40f4282d-0202-4179-953a-3fd511afbaa5] Running
	I1108 10:32:23.426032 1208998 system_pods.go:61] "kube-scheduler-old-k8s-version-171136" [dcbcba65-c6f8-45cd-a9fa-af29cd3b4ab6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:32:23.426041 1208998 system_pods.go:61] "storage-provisioner" [66060f62-f048-459b-885f-8fa591cafed6] Running
	I1108 10:32:23.426047 1208998 system_pods.go:74] duration metric: took 4.694918ms to wait for pod list to return data ...
	I1108 10:32:23.426055 1208998 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:32:23.428760 1208998 default_sa.go:45] found service account: "default"
	I1108 10:32:23.428787 1208998 default_sa.go:55] duration metric: took 2.72131ms for default service account to be created ...
	I1108 10:32:23.428797 1208998 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:32:23.432583 1208998 system_pods.go:86] 8 kube-system pods found
	I1108 10:32:23.432625 1208998 system_pods.go:89] "coredns-5dd5756b68-5m4ph" [08005efc-5866-444b-a834-f1b18d38717c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:32:23.432637 1208998 system_pods.go:89] "etcd-old-k8s-version-171136" [0bf47fe6-f4be-4f1e-adb6-9e157b6b92da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:32:23.432642 1208998 system_pods.go:89] "kindnet-bg4r4" [bc043139-6bce-4061-a3c6-e733d1e90763] Running
	I1108 10:32:23.432650 1208998 system_pods.go:89] "kube-apiserver-old-k8s-version-171136" [05958bfe-f331-4b7b-a251-b6888cb928af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:32:23.432661 1208998 system_pods.go:89] "kube-controller-manager-old-k8s-version-171136" [6e7e2c08-dad2-46e2-a419-96803b5758c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:32:23.432666 1208998 system_pods.go:89] "kube-proxy-8ml4s" [40f4282d-0202-4179-953a-3fd511afbaa5] Running
	I1108 10:32:23.432673 1208998 system_pods.go:89] "kube-scheduler-old-k8s-version-171136" [dcbcba65-c6f8-45cd-a9fa-af29cd3b4ab6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:32:23.432679 1208998 system_pods.go:89] "storage-provisioner" [66060f62-f048-459b-885f-8fa591cafed6] Running
	I1108 10:32:23.432687 1208998 system_pods.go:126] duration metric: took 3.883973ms to wait for k8s-apps to be running ...
	I1108 10:32:23.432700 1208998 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:32:23.432759 1208998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:32:23.448367 1208998 system_svc.go:56] duration metric: took 15.657258ms WaitForService to wait for kubelet
	I1108 10:32:23.448396 1208998 kubeadm.go:587] duration metric: took 6.876472713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:32:23.448415 1208998 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:32:23.451801 1208998 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:32:23.451831 1208998 node_conditions.go:123] node cpu capacity is 2
	I1108 10:32:23.451844 1208998 node_conditions.go:105] duration metric: took 3.423187ms to run NodePressure ...
	I1108 10:32:23.451856 1208998 start.go:242] waiting for startup goroutines ...
	I1108 10:32:23.451864 1208998 start.go:247] waiting for cluster config update ...
	I1108 10:32:23.451875 1208998 start.go:256] writing updated cluster config ...
	I1108 10:32:23.452167 1208998 ssh_runner.go:195] Run: rm -f paused
	I1108 10:32:23.456165 1208998 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:32:23.460883 1208998 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5m4ph" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:32:25.467136 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:27.467775 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:29.967342 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:32.467531 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:34.967426 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:36.967708 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:38.967933 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:41.468726 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:43.967233 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:45.967287 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:48.467645 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:50.967095 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:52.971807 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:55.467201 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	I1108 10:32:57.476293 1208998 pod_ready.go:94] pod "coredns-5dd5756b68-5m4ph" is "Ready"
	I1108 10:32:57.476325 1208998 pod_ready.go:86] duration metric: took 34.01541659s for pod "coredns-5dd5756b68-5m4ph" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.481084 1208998 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.489176 1208998 pod_ready.go:94] pod "etcd-old-k8s-version-171136" is "Ready"
	I1108 10:32:57.489209 1208998 pod_ready.go:86] duration metric: took 8.095872ms for pod "etcd-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.493477 1208998 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.499614 1208998 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-171136" is "Ready"
	I1108 10:32:57.499645 1208998 pod_ready.go:86] duration metric: took 6.144006ms for pod "kube-apiserver-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.502872 1208998 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.665000 1208998 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-171136" is "Ready"
	I1108 10:32:57.665026 1208998 pod_ready.go:86] duration metric: took 162.123584ms for pod "kube-controller-manager-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.864627 1208998 pod_ready.go:83] waiting for pod "kube-proxy-8ml4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:58.265376 1208998 pod_ready.go:94] pod "kube-proxy-8ml4s" is "Ready"
	I1108 10:32:58.265402 1208998 pod_ready.go:86] duration metric: took 400.748994ms for pod "kube-proxy-8ml4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:58.465310 1208998 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:58.864588 1208998 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-171136" is "Ready"
	I1108 10:32:58.864616 1208998 pod_ready.go:86] duration metric: took 399.270935ms for pod "kube-scheduler-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:58.864630 1208998 pod_ready.go:40] duration metric: took 35.408432576s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:32:58.920086 1208998 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1108 10:32:58.923025 1208998 out.go:203] 
	W1108 10:32:58.925958 1208998 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 10:32:58.928726 1208998 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 10:32:58.931656 1208998 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-171136" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.776847036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.7837676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.784565582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.800565931Z" level=info msg="Created container b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d/dashboard-metrics-scraper" id=eb1adab6-b6d6-4693-a0d5-ee038fc88813 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.80346541Z" level=info msg="Starting container: b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e" id=88d70a1b-163a-4a7c-8e32-0a53f81e3af6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.805609245Z" level=info msg="Started container" PID=1659 containerID=b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d/dashboard-metrics-scraper id=88d70a1b-163a-4a7c-8e32-0a53f81e3af6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d24aefa7f5a60f89d6773e30de00babd8c99889d518a972d938676b92ca1010e
	Nov 08 10:33:00 old-k8s-version-171136 conmon[1657]: conmon b41545e6757e9358ceca <ninfo>: container 1659 exited with status 1
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.955831327Z" level=info msg="Removing container: f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236" id=9d190a39-ce6b-429f-a2ac-7e78d9035f03 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.965885932Z" level=info msg="Error loading conmon cgroup of container f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236: cgroup deleted" id=9d190a39-ce6b-429f-a2ac-7e78d9035f03 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.969443647Z" level=info msg="Removed container f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d/dashboard-metrics-scraper" id=9d190a39-ce6b-429f-a2ac-7e78d9035f03 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.73275161Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.738431065Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.738477693Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.738500093Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.741933365Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.74196913Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.741992587Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.745305076Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.745336697Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.745358145Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.749177276Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.749217422Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.749242783Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.752283083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.752314942Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	b41545e6757e9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   d24aefa7f5a60       dashboard-metrics-scraper-5f989dc9cf-45n9d       kubernetes-dashboard
	758b80ab804e5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   d30c51f2c4440       storage-provisioner                              kube-system
	25fa1ac9d4bca       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   2141302349a26       kubernetes-dashboard-8694d4445c-k8zsb            kubernetes-dashboard
	2b33161a0a491       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   eab19e7913c48       coredns-5dd5756b68-5m4ph                         kube-system
	ff8ff4eb956b1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   4efd7a751e50c       busybox                                          default
	3c438edaaae97       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   b537f4843472b       kindnet-bg4r4                                    kube-system
	a1ea9a35262a2       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           52 seconds ago      Running             kube-proxy                  1                   f8a96b4184a68       kube-proxy-8ml4s                                 kube-system
	246af0d96cd99       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   d30c51f2c4440       storage-provisioner                              kube-system
	db8c533fb06e8       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           57 seconds ago      Running             etcd                        1                   6729f05e8a503       etcd-old-k8s-version-171136                      kube-system
	6002b979fafdf       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           57 seconds ago      Running             kube-scheduler              1                   80631831d0484       kube-scheduler-old-k8s-version-171136            kube-system
	c0039ca9f9316       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           57 seconds ago      Running             kube-controller-manager     1                   59aa150768ca3       kube-controller-manager-old-k8s-version-171136   kube-system
	86455d1631572       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           57 seconds ago      Running             kube-apiserver              1                   b8a922c15aa93       kube-apiserver-old-k8s-version-171136            kube-system
	
	
	==> coredns [2b33161a0a491a5086c7c8ae7d045c0558f8c2fc886a2ba82e34c1b419eac34b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40102 - 44691 "HINFO IN 6628585141341047097.5452165239705701379. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012847631s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-171136
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-171136
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=old-k8s-version-171136
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_31_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:31:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-171136
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:33:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:32:51 +0000   Sat, 08 Nov 2025 10:31:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:32:51 +0000   Sat, 08 Nov 2025 10:31:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:32:51 +0000   Sat, 08 Nov 2025 10:31:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:32:51 +0000   Sat, 08 Nov 2025 10:31:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-171136
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                abac0900-0998-47c3-b513-18b6d2fce4e7
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-5m4ph                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-old-k8s-version-171136                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-bg4r4                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-171136             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-171136    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-8ml4s                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-171136             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-45n9d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-k8zsb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-171136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node old-k8s-version-171136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-171136 event: Registered Node old-k8s-version-171136 in Controller
	  Normal  NodeReady                92s                  kubelet          Node old-k8s-version-171136 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node old-k8s-version-171136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-171136 event: Registered Node old-k8s-version-171136 in Controller
	
	
	==> dmesg <==
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[ +18.424643] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [db8c533fb06e8ef7402212f3c434623824a29c9cf817e134cf0d1695471f2609] <==
	{"level":"info","ts":"2025-11-08T10:32:16.912889Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T10:32:16.912924Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T10:32:16.913158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-08T10:32:16.913248Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-08T10:32:16.913375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:32:16.913429Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:32:16.916754Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-08T10:32:16.920679Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-08T10:32:16.916899Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:32:16.921158Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-08T10:32:16.921254Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:32:18.71769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-08T10:32:18.717736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-08T10:32:18.717769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-08T10:32:18.717788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-08T10:32:18.717795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-08T10:32:18.717805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-08T10:32:18.717823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-08T10:32:18.7218Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-171136 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T10:32:18.721847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:32:18.722839Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-08T10:32:18.723008Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:32:18.723871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-08T10:32:18.727812Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T10:32:18.727854Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:33:14 up  9:15,  0 user,  load average: 2.24, 3.14, 2.72
	Linux old-k8s-version-171136 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c438edaaae97ac5fc21d3e9f7a5bfc1abf55d6f94c1d40caf872c0f88407309] <==
	I1108 10:32:22.528158       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:32:22.612796       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:32:22.612934       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:32:22.612947       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:32:22.612961       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:32:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:32:22.733491       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:32:22.733519       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:32:22.733528       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:32:22.733630       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:32:52.733108       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:32:52.733111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:32:52.733361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:32:52.733477       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:32:54.334555       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:32:54.334587       1 metrics.go:72] Registering metrics
	I1108 10:32:54.334665       1 controller.go:711] "Syncing nftables rules"
	I1108 10:33:02.732403       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:33:02.732501       1 main.go:301] handling current node
	I1108 10:33:12.738135       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:33:12.738168       1 main.go:301] handling current node
	
	
	==> kube-apiserver [86455d1631572d37d82679402ce9bf75876840bd25c547b5d518b6af7ce1c24d] <==
	I1108 10:32:21.256021       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:32:21.284070       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:32:21.292674       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 10:32:21.292821       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 10:32:21.294034       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 10:32:21.294835       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 10:32:21.294510       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 10:32:21.294613       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 10:32:21.296943       1 aggregator.go:166] initial CRD sync complete...
	I1108 10:32:21.296989       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 10:32:21.297018       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:32:21.297046       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:32:21.339450       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1108 10:32:21.378628       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:32:21.996946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:32:23.152388       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 10:32:23.217050       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 10:32:23.258041       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:32:23.282508       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:32:23.295528       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 10:32:23.379247       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.149.186"}
	I1108 10:32:23.402818       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.215.210"}
	I1108 10:32:33.695505       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:32:33.701940       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1108 10:32:33.738167       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c0039ca9f9316f54572320f29d7cfdc22e2d6bf9c3d7f61d16d19d0dfce14965] <==
	I1108 10:32:33.765660       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-45n9d"
	I1108 10:32:33.783580       1 shared_informer.go:318] Caches are synced for cronjob
	I1108 10:32:33.800857       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 10:32:33.801130       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-k8zsb"
	I1108 10:32:33.822988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.392245ms"
	I1108 10:32:33.834774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="104.46423ms"
	I1108 10:32:33.840527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="17.417254ms"
	I1108 10:32:33.840734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.288µs"
	I1108 10:32:33.850042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.011µs"
	I1108 10:32:33.855816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.643894ms"
	I1108 10:32:33.855998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="70.381µs"
	I1108 10:32:33.874369       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 10:32:33.881591       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.642µs"
	I1108 10:32:34.213404       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:32:34.213453       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 10:32:34.227584       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:32:39.898872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.851µs"
	I1108 10:32:40.915970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.934µs"
	I1108 10:32:41.915988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.895µs"
	I1108 10:32:44.936854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.670144ms"
	I1108 10:32:44.937147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.604µs"
	I1108 10:32:57.467710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.508364ms"
	I1108 10:32:57.469535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.306µs"
	I1108 10:33:00.965462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.325µs"
	I1108 10:33:04.998996       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.464µs"
	
	
	==> kube-proxy [a1ea9a35262a2ecf211dbe2bd4eb8aa0b383c6dde45b73c0eb91cf2e3d64d7d1] <==
	I1108 10:32:22.537749       1 server_others.go:69] "Using iptables proxy"
	I1108 10:32:22.575734       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1108 10:32:22.614390       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:32:22.632340       1 server_others.go:152] "Using iptables Proxier"
	I1108 10:32:22.632378       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 10:32:22.632387       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 10:32:22.632414       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 10:32:22.632872       1 server.go:846] "Version info" version="v1.28.0"
	I1108 10:32:22.632886       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:32:22.634129       1 config.go:188] "Starting service config controller"
	I1108 10:32:22.634200       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 10:32:22.634243       1 config.go:97] "Starting endpoint slice config controller"
	I1108 10:32:22.634285       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 10:32:22.634713       1 config.go:315] "Starting node config controller"
	I1108 10:32:22.634766       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 10:32:22.734756       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 10:32:22.734814       1 shared_informer.go:318] Caches are synced for service config
	I1108 10:32:22.735053       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6002b979fafdf69a44654d6dde5cc544aca07f7cc8a38cab91edafb52c08cd41] <==
	I1108 10:32:19.858223       1 serving.go:348] Generated self-signed cert in-memory
	I1108 10:32:21.568088       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1108 10:32:21.568234       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:32:21.573154       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1108 10:32:21.573361       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1108 10:32:21.573405       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1108 10:32:21.573448       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 10:32:21.581071       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:32:21.586697       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 10:32:21.585959       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:32:21.587099       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 10:32:21.677515       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1108 10:32:21.688560       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 10:32:21.688636       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Nov 08 10:32:33 old-k8s-version-171136 kubelet[780]: I1108 10:32:33.908256     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-45n9d\" (UID: \"4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d"
	Nov 08 10:32:33 old-k8s-version-171136 kubelet[780]: I1108 10:32:33.908349     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hxcn\" (UniqueName: \"kubernetes.io/projected/4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a-kube-api-access-5hxcn\") pod \"dashboard-metrics-scraper-5f989dc9cf-45n9d\" (UID: \"4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d"
	Nov 08 10:32:33 old-k8s-version-171136 kubelet[780]: I1108 10:32:33.908430     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdl4c\" (UniqueName: \"kubernetes.io/projected/16871c16-e616-4ff3-8dfa-809dcd2a3b26-kube-api-access-jdl4c\") pod \"kubernetes-dashboard-8694d4445c-k8zsb\" (UID: \"16871c16-e616-4ff3-8dfa-809dcd2a3b26\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-k8zsb"
	Nov 08 10:32:35 old-k8s-version-171136 kubelet[780]: W1108 10:32:35.031098     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/crio-d24aefa7f5a60f89d6773e30de00babd8c99889d518a972d938676b92ca1010e WatchSource:0}: Error finding container d24aefa7f5a60f89d6773e30de00babd8c99889d518a972d938676b92ca1010e: Status 404 returned error can't find the container with id d24aefa7f5a60f89d6773e30de00babd8c99889d518a972d938676b92ca1010e
	Nov 08 10:32:35 old-k8s-version-171136 kubelet[780]: W1108 10:32:35.051811     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/crio-2141302349a2634516545d12f0fcfb374c9774b6b76c6ca12487ddf553d7f7fc WatchSource:0}: Error finding container 2141302349a2634516545d12f0fcfb374c9774b6b76c6ca12487ddf553d7f7fc: Status 404 returned error can't find the container with id 2141302349a2634516545d12f0fcfb374c9774b6b76c6ca12487ddf553d7f7fc
	Nov 08 10:32:39 old-k8s-version-171136 kubelet[780]: I1108 10:32:39.884574     780 scope.go:117] "RemoveContainer" containerID="710bb8b0c7ed5282688dc29dcefa2a227372d1d60f90cde424238c462ebf6bc9"
	Nov 08 10:32:40 old-k8s-version-171136 kubelet[780]: I1108 10:32:40.892739     780 scope.go:117] "RemoveContainer" containerID="710bb8b0c7ed5282688dc29dcefa2a227372d1d60f90cde424238c462ebf6bc9"
	Nov 08 10:32:40 old-k8s-version-171136 kubelet[780]: I1108 10:32:40.893080     780 scope.go:117] "RemoveContainer" containerID="f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236"
	Nov 08 10:32:40 old-k8s-version-171136 kubelet[780]: E1108 10:32:40.893336     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-45n9d_kubernetes-dashboard(4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d" podUID="4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a"
	Nov 08 10:32:41 old-k8s-version-171136 kubelet[780]: I1108 10:32:41.896763     780 scope.go:117] "RemoveContainer" containerID="f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236"
	Nov 08 10:32:41 old-k8s-version-171136 kubelet[780]: E1108 10:32:41.897045     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-45n9d_kubernetes-dashboard(4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d" podUID="4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a"
	Nov 08 10:32:44 old-k8s-version-171136 kubelet[780]: I1108 10:32:44.984615     780 scope.go:117] "RemoveContainer" containerID="f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236"
	Nov 08 10:32:44 old-k8s-version-171136 kubelet[780]: E1108 10:32:44.985456     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-45n9d_kubernetes-dashboard(4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d" podUID="4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a"
	Nov 08 10:32:52 old-k8s-version-171136 kubelet[780]: I1108 10:32:52.924240     780 scope.go:117] "RemoveContainer" containerID="246af0d96cd99263d477cfcfde9cf5b96d4eb41bbf3703a2a45a5b4e53cc84de"
	Nov 08 10:32:52 old-k8s-version-171136 kubelet[780]: I1108 10:32:52.949108     780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-k8zsb" podStartSLOduration=10.948881728 podCreationTimestamp="2025-11-08 10:32:33 +0000 UTC" firstStartedPulling="2025-11-08 10:32:35.054085974 +0000 UTC m=+19.470411943" lastFinishedPulling="2025-11-08 10:32:44.054253866 +0000 UTC m=+28.470579843" observedRunningTime="2025-11-08 10:32:44.919496436 +0000 UTC m=+29.335822405" watchObservedRunningTime="2025-11-08 10:32:52.949049628 +0000 UTC m=+37.365375605"
	Nov 08 10:33:00 old-k8s-version-171136 kubelet[780]: I1108 10:33:00.770930     780 scope.go:117] "RemoveContainer" containerID="f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236"
	Nov 08 10:33:00 old-k8s-version-171136 kubelet[780]: I1108 10:33:00.943883     780 scope.go:117] "RemoveContainer" containerID="f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236"
	Nov 08 10:33:00 old-k8s-version-171136 kubelet[780]: I1108 10:33:00.944151     780 scope.go:117] "RemoveContainer" containerID="b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e"
	Nov 08 10:33:00 old-k8s-version-171136 kubelet[780]: E1108 10:33:00.944421     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-45n9d_kubernetes-dashboard(4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d" podUID="4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a"
	Nov 08 10:33:04 old-k8s-version-171136 kubelet[780]: I1108 10:33:04.984201     780 scope.go:117] "RemoveContainer" containerID="b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e"
	Nov 08 10:33:04 old-k8s-version-171136 kubelet[780]: E1108 10:33:04.985008     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-45n9d_kubernetes-dashboard(4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d" podUID="4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a"
	Nov 08 10:33:11 old-k8s-version-171136 kubelet[780]: I1108 10:33:11.242921     780 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 08 10:33:11 old-k8s-version-171136 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:33:11 old-k8s-version-171136 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:33:11 old-k8s-version-171136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [25fa1ac9d4bca6c7f5c615c071f8779149b53aa42686a12af40926c011b98b71] <==
	2025/11/08 10:32:44 Starting overwatch
	2025/11/08 10:32:44 Using namespace: kubernetes-dashboard
	2025/11/08 10:32:44 Using in-cluster config to connect to apiserver
	2025/11/08 10:32:44 Using secret token for csrf signing
	2025/11/08 10:32:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:32:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:32:44 Successful initial request to the apiserver, version: v1.28.0
	2025/11/08 10:32:44 Generating JWE encryption key
	2025/11/08 10:32:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:32:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:32:45 Initializing JWE encryption key from synchronized object
	2025/11/08 10:32:45 Creating in-cluster Sidecar client
	2025/11/08 10:32:45 Serving insecurely on HTTP port: 9090
	2025/11/08 10:32:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [246af0d96cd99263d477cfcfde9cf5b96d4eb41bbf3703a2a45a5b4e53cc84de] <==
	I1108 10:32:22.475691       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:32:52.485221       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [758b80ab804e552feb3f98e52fa667d161d85d3bab2614ec5c6efe8963ea3698] <==
	I1108 10:32:52.976153       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:32:52.990681       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:32:52.990816       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 10:33:10.389781       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:33:10.390188       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a450f0e-2def-442b-8030-194bd9a30378", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-171136_7a5e9898-64f9-45c4-a103-46258ada2a91 became leader
	I1108 10:33:10.390255       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-171136_7a5e9898-64f9-45c4-a103-46258ada2a91!
	I1108 10:33:10.490897       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-171136_7a5e9898-64f9-45c4-a103-46258ada2a91!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-171136 -n old-k8s-version-171136
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-171136 -n old-k8s-version-171136: exit status 2 (378.486388ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-171136 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-171136
helpers_test.go:243: (dbg) docker inspect old-k8s-version-171136:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d",
	        "Created": "2025-11-08T10:30:49.022889439Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1209125,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:32:08.741273141Z",
	            "FinishedAt": "2025-11-08T10:32:07.904545424Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/hosts",
	        "LogPath": "/var/lib/docker/containers/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d-json.log",
	        "Name": "/old-k8s-version-171136",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-171136:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-171136",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d",
	                "LowerDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a271db991ac83c4125fead9e6482b51b01105fd2df0dac0c2da512a9f6083e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-171136",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-171136/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-171136",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-171136",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-171136",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "183393353d13bc2f7402a4414fec9ceba21ee1c49c86570517763443eaeb522b",
	            "SandboxKey": "/var/run/docker/netns/183393353d13",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34512"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34513"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34516"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34514"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34515"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-171136": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:50:9a:35:75:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de4af9df12e1c8f538a1e008be00be15053361dbab11b5398b5ceb5166430671",
	                    "EndpointID": "49333c1233b638479849e812fc65bff81ae85c02fe07fc4e3060509059e4fcd5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-171136",
	                        "b7cf45de166d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
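helpers_test.go note: the JSON above is the `docker container inspect` output captured for the paused profile; its Ports map shows the published SSH endpoint (22/tcp mapped to host port 34512 in this run). As a minimal sketch, and assuming the docker CLI is available on the test host and the container name matches this profile, the same value can be read back with the Go template minikube itself runs later in these logs:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-171136
	# prints 34512 for this run, matching the Ports map in the inspect output above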
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-171136 -n old-k8s-version-171136
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-171136 -n old-k8s-version-171136: exit status 2 (384.905578ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-171136 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-171136 logs -n 25: (1.254030036s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-731120 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo containerd config dump                                                                                                                                                                                                  │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo crio config                                                                                                                                                                                                             │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ delete  │ -p cilium-731120                                                                                                                                                                                                                              │ cilium-731120             │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p force-systemd-env-680693 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-680693  │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ delete  │ -p kubernetes-upgrade-666491                                                                                                                                                                                                                  │ kubernetes-upgrade-666491 │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-837698    │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p force-systemd-env-680693                                                                                                                                                                                                                   │ force-systemd-env-680693  │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p cert-options-517657 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:30 UTC │
	│ ssh     │ cert-options-517657 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ ssh     │ -p cert-options-517657 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-517657                                                                                                                                                                                                                        │ cert-options-517657       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-171136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-171136 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │ 08 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-171136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ image   │ old-k8s-version-171136 image list --format=json                                                                                                                                                                                               │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-171136 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-171136    │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:32:08
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:32:08.444611 1208998 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:32:08.444756 1208998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:32:08.444767 1208998 out.go:374] Setting ErrFile to fd 2...
	I1108 10:32:08.444779 1208998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:32:08.445163 1208998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:32:08.445618 1208998 out.go:368] Setting JSON to false
	I1108 10:32:08.446672 1208998 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33274,"bootTime":1762564655,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:32:08.446779 1208998 start.go:143] virtualization:  
	I1108 10:32:08.449881 1208998 out.go:179] * [old-k8s-version-171136] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:32:08.453722 1208998 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:32:08.453807 1208998 notify.go:221] Checking for updates...
	I1108 10:32:08.459719 1208998 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:32:08.462687 1208998 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:32:08.465668 1208998 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:32:08.468575 1208998 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:32:08.471519 1208998 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:32:08.474949 1208998 config.go:182] Loaded profile config "old-k8s-version-171136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:32:08.478485 1208998 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1108 10:32:08.481468 1208998 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:32:08.517131 1208998 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:32:08.517311 1208998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:32:08.572147 1208998 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:32:08.562098072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:32:08.572258 1208998 docker.go:319] overlay module found
	I1108 10:32:08.575315 1208998 out.go:179] * Using the docker driver based on existing profile
	I1108 10:32:08.578148 1208998 start.go:309] selected driver: docker
	I1108 10:32:08.578173 1208998 start.go:930] validating driver "docker" against &{Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:32:08.578273 1208998 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:32:08.578949 1208998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:32:08.646272 1208998 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:32:08.637213806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:32:08.646664 1208998 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:32:08.646694 1208998 cni.go:84] Creating CNI manager for ""
	I1108 10:32:08.646751 1208998 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:32:08.646786 1208998 start.go:353] cluster config:
	{Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:32:08.651849 1208998 out.go:179] * Starting "old-k8s-version-171136" primary control-plane node in "old-k8s-version-171136" cluster
	I1108 10:32:08.654639 1208998 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:32:08.657569 1208998 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:32:08.660403 1208998 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:32:08.660498 1208998 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1108 10:32:08.660512 1208998 cache.go:59] Caching tarball of preloaded images
	I1108 10:32:08.660522 1208998 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:32:08.660592 1208998 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:32:08.660604 1208998 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1108 10:32:08.660714 1208998 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/config.json ...
	I1108 10:32:08.679145 1208998 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:32:08.679165 1208998 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:32:08.679178 1208998 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:32:08.679200 1208998 start.go:360] acquireMachinesLock for old-k8s-version-171136: {Name:mk3d8c83478e2975fc25a9dafdc0d687aa9eb7c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:32:08.679254 1208998 start.go:364] duration metric: took 35.904µs to acquireMachinesLock for "old-k8s-version-171136"
	I1108 10:32:08.679273 1208998 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:32:08.679279 1208998 fix.go:54] fixHost starting: 
	I1108 10:32:08.679561 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:08.705454 1208998 fix.go:112] recreateIfNeeded on old-k8s-version-171136: state=Stopped err=<nil>
	W1108 10:32:08.705494 1208998 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 10:32:08.708557 1208998 out.go:252] * Restarting existing docker container for "old-k8s-version-171136" ...
	I1108 10:32:08.708645 1208998 cli_runner.go:164] Run: docker start old-k8s-version-171136
	I1108 10:32:08.967510 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:08.996909 1208998 kic.go:430] container "old-k8s-version-171136" state is running.
	I1108 10:32:08.997278 1208998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-171136
	I1108 10:32:09.018092 1208998 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/config.json ...
	I1108 10:32:09.018335 1208998 machine.go:94] provisionDockerMachine start ...
	I1108 10:32:09.018404 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:09.040777 1208998 main.go:143] libmachine: Using SSH client type: native
	I1108 10:32:09.041180 1208998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1108 10:32:09.041199 1208998 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:32:09.042011 1208998 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:32:12.196029 1208998 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171136
	
	I1108 10:32:12.196053 1208998 ubuntu.go:182] provisioning hostname "old-k8s-version-171136"
	I1108 10:32:12.196125 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:12.213647 1208998 main.go:143] libmachine: Using SSH client type: native
	I1108 10:32:12.214009 1208998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1108 10:32:12.214028 1208998 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171136 && echo "old-k8s-version-171136" | sudo tee /etc/hostname
	I1108 10:32:12.389685 1208998 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171136
	
	I1108 10:32:12.389774 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:12.407588 1208998 main.go:143] libmachine: Using SSH client type: native
	I1108 10:32:12.407922 1208998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1108 10:32:12.407940 1208998 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171136/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:32:12.560891 1208998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:32:12.560958 1208998 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:32:12.560995 1208998 ubuntu.go:190] setting up certificates
	I1108 10:32:12.561036 1208998 provision.go:84] configureAuth start
	I1108 10:32:12.561119 1208998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-171136
	I1108 10:32:12.578280 1208998 provision.go:143] copyHostCerts
	I1108 10:32:12.578347 1208998 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:32:12.578364 1208998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:32:12.578441 1208998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:32:12.578544 1208998 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:32:12.578549 1208998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:32:12.578573 1208998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:32:12.578636 1208998 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:32:12.578641 1208998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:32:12.578666 1208998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:32:12.578721 1208998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171136 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-171136]
	I1108 10:32:13.013655 1208998 provision.go:177] copyRemoteCerts
	I1108 10:32:13.013747 1208998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:32:13.013823 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.031060 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:13.136199 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:32:13.153520 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1108 10:32:13.178828 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:32:13.196782 1208998 provision.go:87] duration metric: took 635.692609ms to configureAuth
	I1108 10:32:13.196807 1208998 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:32:13.197047 1208998 config.go:182] Loaded profile config "old-k8s-version-171136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:32:13.197148 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.213905 1208998 main.go:143] libmachine: Using SSH client type: native
	I1108 10:32:13.214213 1208998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1108 10:32:13.214235 1208998 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:32:13.533570 1208998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:32:13.533598 1208998 machine.go:97] duration metric: took 4.515251516s to provisionDockerMachine
	I1108 10:32:13.533609 1208998 start.go:293] postStartSetup for "old-k8s-version-171136" (driver="docker")
	I1108 10:32:13.533657 1208998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:32:13.533760 1208998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:32:13.533835 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.553388 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:13.664224 1208998 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:32:13.667452 1208998 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:32:13.667491 1208998 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:32:13.667503 1208998 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:32:13.667558 1208998 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:32:13.667645 1208998 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:32:13.667755 1208998 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:32:13.675217 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:32:13.692545 1208998 start.go:296] duration metric: took 158.883747ms for postStartSetup
	I1108 10:32:13.692686 1208998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:32:13.692752 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.712046 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:13.813579 1208998 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:32:13.818438 1208998 fix.go:56] duration metric: took 5.139151846s for fixHost
	I1108 10:32:13.818464 1208998 start.go:83] releasing machines lock for "old-k8s-version-171136", held for 5.139201741s
	I1108 10:32:13.818541 1208998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-171136
	I1108 10:32:13.835617 1208998 ssh_runner.go:195] Run: cat /version.json
	I1108 10:32:13.835677 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.835956 1208998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:32:13.836036 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:13.857822 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:13.870080 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:13.960105 1208998 ssh_runner.go:195] Run: systemctl --version
	I1108 10:32:14.054386 1208998 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:32:14.092829 1208998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:32:14.097796 1208998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:32:14.097875 1208998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:32:14.106901 1208998 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:32:14.106930 1208998 start.go:496] detecting cgroup driver to use...
	I1108 10:32:14.106971 1208998 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:32:14.107022 1208998 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:32:14.123025 1208998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:32:14.136379 1208998 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:32:14.136507 1208998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:32:14.151953 1208998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:32:14.166342 1208998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:32:14.289770 1208998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:32:14.419099 1208998 docker.go:234] disabling docker service ...
	I1108 10:32:14.419221 1208998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:32:14.435833 1208998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:32:14.449180 1208998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:32:14.574518 1208998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:32:14.697250 1208998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:32:14.710717 1208998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:32:14.727261 1208998 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 10:32:14.727324 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.738350 1208998 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:32:14.738421 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.748617 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.758165 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.767131 1208998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:32:14.775665 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.785154 1208998 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.793835 1208998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:32:14.805037 1208998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:32:14.812353 1208998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:32:14.819842 1208998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:32:14.941742 1208998 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:32:15.099236 1208998 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:32:15.099317 1208998 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:32:15.103539 1208998 start.go:564] Will wait 60s for crictl version
	I1108 10:32:15.103608 1208998 ssh_runner.go:195] Run: which crictl
	I1108 10:32:15.107665 1208998 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:32:15.134134 1208998 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:32:15.134223 1208998 ssh_runner.go:195] Run: crio --version
	I1108 10:32:15.164633 1208998 ssh_runner.go:195] Run: crio --version
	I1108 10:32:15.203652 1208998 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1108 10:32:15.206547 1208998 cli_runner.go:164] Run: docker network inspect old-k8s-version-171136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:32:15.222865 1208998 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:32:15.226922 1208998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:32:15.237112 1208998 kubeadm.go:884] updating cluster {Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:32:15.237225 1208998 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 10:32:15.237279 1208998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:32:15.276060 1208998 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:32:15.276081 1208998 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:32:15.276138 1208998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:32:15.303971 1208998 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:32:15.303996 1208998 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:32:15.304005 1208998 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1108 10:32:15.304106 1208998 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-171136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:32:15.304187 1208998 ssh_runner.go:195] Run: crio config
	I1108 10:32:15.363353 1208998 cni.go:84] Creating CNI manager for ""
	I1108 10:32:15.363378 1208998 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:32:15.363396 1208998 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:32:15.363420 1208998 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-171136 NodeName:old-k8s-version-171136 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:32:15.363567 1208998 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-171136"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:32:15.363642 1208998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1108 10:32:15.371404 1208998 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:32:15.371493 1208998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:32:15.379036 1208998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1108 10:32:15.391671 1208998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:32:15.404700 1208998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1108 10:32:15.418870 1208998 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:32:15.423099 1208998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:32:15.433431 1208998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:32:15.563019 1208998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:32:15.580661 1208998 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136 for IP: 192.168.85.2
	I1108 10:32:15.580726 1208998 certs.go:195] generating shared ca certs ...
	I1108 10:32:15.580760 1208998 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:32:15.580950 1208998 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:32:15.581024 1208998 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:32:15.581060 1208998 certs.go:257] generating profile certs ...
	I1108 10:32:15.581201 1208998 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.key
	I1108 10:32:15.581325 1208998 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.key.3f7b60cf
	I1108 10:32:15.581389 1208998 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.key
	I1108 10:32:15.581542 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:32:15.581610 1208998 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:32:15.581648 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:32:15.581702 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:32:15.581760 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:32:15.581806 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:32:15.581890 1208998 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:32:15.582578 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:32:15.604559 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:32:15.625410 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:32:15.646653 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:32:15.667793 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1108 10:32:15.693485 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:32:15.714394 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:32:15.737502 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:32:15.770371 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:32:15.797233 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:32:15.822749 1208998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:32:15.843773 1208998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:32:15.858999 1208998 ssh_runner.go:195] Run: openssl version
	I1108 10:32:15.867116 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:32:15.877530 1208998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:32:15.881547 1208998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:32:15.881626 1208998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:32:15.930110 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:32:15.938867 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:32:15.947566 1208998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:32:15.951364 1208998 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:32:15.951472 1208998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:32:15.997419 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:32:16.011872 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:32:16.020901 1208998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:32:16.032833 1208998 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:32:16.032921 1208998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:32:16.074871 1208998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:32:16.082996 1208998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:32:16.086888 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:32:16.128302 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:32:16.169901 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:32:16.236567 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:32:16.297561 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:32:16.381127 1208998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
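
	(Editor's note: the `openssl x509 -checkend 86400` runs above fail if a certificate expires within 24 hours. A minimal Go sketch of the same check using crypto/x509; the certificate path is illustrative, not a fixed minikube location.)

	// certcheck.go - illustration of the -checkend 86400 test.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if soon {
			fmt.Println("certificate expires within 24h") // analogous to a nonzero openssl exit
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}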
	I1108 10:32:16.474342 1208998 kubeadm.go:401] StartCluster: {Name:old-k8s-version-171136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-171136 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:32:16.474428 1208998 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:32:16.474492 1208998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:32:16.520504 1208998 cri.go:89] found id: "db8c533fb06e8ef7402212f3c434623824a29c9cf817e134cf0d1695471f2609"
	I1108 10:32:16.520528 1208998 cri.go:89] found id: "6002b979fafdf69a44654d6dde5cc544aca07f7cc8a38cab91edafb52c08cd41"
	I1108 10:32:16.520534 1208998 cri.go:89] found id: "c0039ca9f9316f54572320f29d7cfdc22e2d6bf9c3d7f61d16d19d0dfce14965"
	I1108 10:32:16.520543 1208998 cri.go:89] found id: "86455d1631572d37d82679402ce9bf75876840bd25c547b5d518b6af7ce1c24d"
	I1108 10:32:16.520547 1208998 cri.go:89] found id: ""
	I1108 10:32:16.520596 1208998 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:32:16.538075 1208998 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:32:16Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:32:16.538144 1208998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:32:16.548302 1208998 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:32:16.548324 1208998 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:32:16.548374 1208998 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:32:16.559534 1208998 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:32:16.560085 1208998 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-171136" does not appear in /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:32:16.560357 1208998 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-1027379/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-171136" cluster setting kubeconfig missing "old-k8s-version-171136" context setting]
	I1108 10:32:16.560876 1208998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
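
	(Editor's note: the "kubeconfig needs updating (will repair)" message above is triggered when the profile has neither a cluster nor a context entry in the kubeconfig. A sketch of that existence check using client-go's clientcmd loader, which is a real package; the KUBECONFIG-based path and the program itself are illustrative, not minikube's code.)

	// kubeconfig_check.go - does the profile appear in the kubeconfig?
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := os.Getenv("KUBECONFIG") // e.g. ~/.kube/config
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		name := "old-k8s-version-171136"
		_, hasCluster := cfg.Clusters[name]
		_, hasContext := cfg.Contexts[name]
		// Missing either entry is what leads to the "will repair" path in the log.
		fmt.Printf("cluster entry: %v, context entry: %v\n", hasCluster, hasContext)
	}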
	I1108 10:32:16.562427 1208998 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:32:16.570292 1208998 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:32:16.570370 1208998 kubeadm.go:602] duration metric: took 22.038941ms to restartPrimaryControlPlane
	I1108 10:32:16.570395 1208998 kubeadm.go:403] duration metric: took 96.063127ms to StartCluster
	I1108 10:32:16.570444 1208998 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:32:16.570539 1208998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:32:16.571566 1208998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:32:16.571866 1208998 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:32:16.572307 1208998 config.go:182] Loaded profile config "old-k8s-version-171136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 10:32:16.572381 1208998 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:32:16.572578 1208998 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-171136"
	I1108 10:32:16.572592 1208998 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-171136"
	W1108 10:32:16.572598 1208998 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:32:16.572624 1208998 host.go:66] Checking if "old-k8s-version-171136" exists ...
	I1108 10:32:16.572645 1208998 addons.go:70] Setting dashboard=true in profile "old-k8s-version-171136"
	I1108 10:32:16.572660 1208998 addons.go:239] Setting addon dashboard=true in "old-k8s-version-171136"
	W1108 10:32:16.572666 1208998 addons.go:248] addon dashboard should already be in state true
	I1108 10:32:16.572686 1208998 host.go:66] Checking if "old-k8s-version-171136" exists ...
	I1108 10:32:16.573088 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:16.573308 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:16.573676 1208998 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-171136"
	I1108 10:32:16.573693 1208998 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-171136"
	I1108 10:32:16.573970 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:16.584746 1208998 out.go:179] * Verifying Kubernetes components...
	I1108 10:32:16.591402 1208998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:32:16.632315 1208998 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-171136"
	W1108 10:32:16.632348 1208998 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:32:16.632374 1208998 host.go:66] Checking if "old-k8s-version-171136" exists ...
	I1108 10:32:16.632854 1208998 cli_runner.go:164] Run: docker container inspect old-k8s-version-171136 --format={{.State.Status}}
	I1108 10:32:16.640431 1208998 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:32:16.643666 1208998 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:32:16.643734 1208998 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:32:16.643749 1208998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:32:16.643816 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:16.656227 1208998 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:32:16.659570 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:32:16.659597 1208998 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:32:16.659669 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:16.687620 1208998 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:32:16.687643 1208998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:32:16.687706 1208998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-171136
	I1108 10:32:16.720839 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:16.728571 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:16.745019 1208998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/old-k8s-version-171136/id_rsa Username:docker}
	I1108 10:32:16.900282 1208998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:32:16.940338 1208998 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-171136" to be "Ready" ...
	I1108 10:32:16.974246 1208998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:32:16.976996 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:32:16.977018 1208998 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:32:17.037892 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:32:17.037961 1208998 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:32:17.086066 1208998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:32:17.107120 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:32:17.107193 1208998 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:32:17.184956 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:32:17.185020 1208998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:32:17.238242 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:32:17.238317 1208998 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:32:17.317098 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:32:17.317172 1208998 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:32:17.339544 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:32:17.339617 1208998 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:32:17.358394 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:32:17.358473 1208998 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:32:17.376545 1208998 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:32:17.376607 1208998 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:32:17.389589 1208998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:32:21.218701 1208998 node_ready.go:49] node "old-k8s-version-171136" is "Ready"
	I1108 10:32:21.218779 1208998 node_ready.go:38] duration metric: took 4.278358595s for node "old-k8s-version-171136" to be "Ready" ...
	I1108 10:32:21.218828 1208998 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:32:21.218936 1208998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:32:22.884078 1208998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.909798403s)
	I1108 10:32:22.884140 1208998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.798004256s)
	I1108 10:32:23.410432 1208998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.020753033s)
	I1108 10:32:23.410655 1208998 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.191688533s)
	I1108 10:32:23.410676 1208998 api_server.go:72] duration metric: took 6.838754901s to wait for apiserver process to appear ...
	I1108 10:32:23.410696 1208998 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:32:23.410726 1208998 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:32:23.413738 1208998 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-171136 addons enable metrics-server
	
	I1108 10:32:23.416744 1208998 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1108 10:32:23.419735 1208998 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:32:23.419972 1208998 addons.go:515] duration metric: took 6.847587602s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1108 10:32:23.421308 1208998 api_server.go:141] control plane version: v1.28.0
	I1108 10:32:23.421333 1208998 api_server.go:131] duration metric: took 10.622437ms to wait for apiserver health ...
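
	(Editor's note: the healthz wait above polls https://192.168.85.2:8443/healthz until it returns 200 "ok". A minimal probe sketch; it skips TLS verification purely for illustration, whereas the real check trusts the cluster CA.)

	// healthz_probe.go - illustration of the apiserver healthz poll.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only: minikube verifies the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect 200 ok
	}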
	I1108 10:32:23.421346 1208998 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:32:23.425935 1208998 system_pods.go:59] 8 kube-system pods found
	I1108 10:32:23.425981 1208998 system_pods.go:61] "coredns-5dd5756b68-5m4ph" [08005efc-5866-444b-a834-f1b18d38717c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:32:23.425991 1208998 system_pods.go:61] "etcd-old-k8s-version-171136" [0bf47fe6-f4be-4f1e-adb6-9e157b6b92da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:32:23.425997 1208998 system_pods.go:61] "kindnet-bg4r4" [bc043139-6bce-4061-a3c6-e733d1e90763] Running
	I1108 10:32:23.426005 1208998 system_pods.go:61] "kube-apiserver-old-k8s-version-171136" [05958bfe-f331-4b7b-a251-b6888cb928af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:32:23.426017 1208998 system_pods.go:61] "kube-controller-manager-old-k8s-version-171136" [6e7e2c08-dad2-46e2-a419-96803b5758c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:32:23.426025 1208998 system_pods.go:61] "kube-proxy-8ml4s" [40f4282d-0202-4179-953a-3fd511afbaa5] Running
	I1108 10:32:23.426032 1208998 system_pods.go:61] "kube-scheduler-old-k8s-version-171136" [dcbcba65-c6f8-45cd-a9fa-af29cd3b4ab6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:32:23.426041 1208998 system_pods.go:61] "storage-provisioner" [66060f62-f048-459b-885f-8fa591cafed6] Running
	I1108 10:32:23.426047 1208998 system_pods.go:74] duration metric: took 4.694918ms to wait for pod list to return data ...
	I1108 10:32:23.426055 1208998 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:32:23.428760 1208998 default_sa.go:45] found service account: "default"
	I1108 10:32:23.428787 1208998 default_sa.go:55] duration metric: took 2.72131ms for default service account to be created ...
	I1108 10:32:23.428797 1208998 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:32:23.432583 1208998 system_pods.go:86] 8 kube-system pods found
	I1108 10:32:23.432625 1208998 system_pods.go:89] "coredns-5dd5756b68-5m4ph" [08005efc-5866-444b-a834-f1b18d38717c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:32:23.432637 1208998 system_pods.go:89] "etcd-old-k8s-version-171136" [0bf47fe6-f4be-4f1e-adb6-9e157b6b92da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:32:23.432642 1208998 system_pods.go:89] "kindnet-bg4r4" [bc043139-6bce-4061-a3c6-e733d1e90763] Running
	I1108 10:32:23.432650 1208998 system_pods.go:89] "kube-apiserver-old-k8s-version-171136" [05958bfe-f331-4b7b-a251-b6888cb928af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:32:23.432661 1208998 system_pods.go:89] "kube-controller-manager-old-k8s-version-171136" [6e7e2c08-dad2-46e2-a419-96803b5758c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:32:23.432666 1208998 system_pods.go:89] "kube-proxy-8ml4s" [40f4282d-0202-4179-953a-3fd511afbaa5] Running
	I1108 10:32:23.432673 1208998 system_pods.go:89] "kube-scheduler-old-k8s-version-171136" [dcbcba65-c6f8-45cd-a9fa-af29cd3b4ab6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:32:23.432679 1208998 system_pods.go:89] "storage-provisioner" [66060f62-f048-459b-885f-8fa591cafed6] Running
	I1108 10:32:23.432687 1208998 system_pods.go:126] duration metric: took 3.883973ms to wait for k8s-apps to be running ...
	I1108 10:32:23.432700 1208998 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:32:23.432759 1208998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:32:23.448367 1208998 system_svc.go:56] duration metric: took 15.657258ms WaitForService to wait for kubelet
	I1108 10:32:23.448396 1208998 kubeadm.go:587] duration metric: took 6.876472713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:32:23.448415 1208998 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:32:23.451801 1208998 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:32:23.451831 1208998 node_conditions.go:123] node cpu capacity is 2
	I1108 10:32:23.451844 1208998 node_conditions.go:105] duration metric: took 3.423187ms to run NodePressure ...
	I1108 10:32:23.451856 1208998 start.go:242] waiting for startup goroutines ...
	I1108 10:32:23.451864 1208998 start.go:247] waiting for cluster config update ...
	I1108 10:32:23.451875 1208998 start.go:256] writing updated cluster config ...
	I1108 10:32:23.452167 1208998 ssh_runner.go:195] Run: rm -f paused
	I1108 10:32:23.456165 1208998 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:32:23.460883 1208998 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5m4ph" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:32:25.467136 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:27.467775 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:29.967342 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:32.467531 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:34.967426 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:36.967708 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:38.967933 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:41.468726 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:43.967233 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:45.967287 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:48.467645 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:50.967095 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:52.971807 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	W1108 10:32:55.467201 1208998 pod_ready.go:104] pod "coredns-5dd5756b68-5m4ph" is not "Ready", error: <nil>
	I1108 10:32:57.476293 1208998 pod_ready.go:94] pod "coredns-5dd5756b68-5m4ph" is "Ready"
	I1108 10:32:57.476325 1208998 pod_ready.go:86] duration metric: took 34.01541659s for pod "coredns-5dd5756b68-5m4ph" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.481084 1208998 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.489176 1208998 pod_ready.go:94] pod "etcd-old-k8s-version-171136" is "Ready"
	I1108 10:32:57.489209 1208998 pod_ready.go:86] duration metric: took 8.095872ms for pod "etcd-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.493477 1208998 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.499614 1208998 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-171136" is "Ready"
	I1108 10:32:57.499645 1208998 pod_ready.go:86] duration metric: took 6.144006ms for pod "kube-apiserver-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.502872 1208998 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.665000 1208998 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-171136" is "Ready"
	I1108 10:32:57.665026 1208998 pod_ready.go:86] duration metric: took 162.123584ms for pod "kube-controller-manager-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:57.864627 1208998 pod_ready.go:83] waiting for pod "kube-proxy-8ml4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:58.265376 1208998 pod_ready.go:94] pod "kube-proxy-8ml4s" is "Ready"
	I1108 10:32:58.265402 1208998 pod_ready.go:86] duration metric: took 400.748994ms for pod "kube-proxy-8ml4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:58.465310 1208998 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:58.864588 1208998 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-171136" is "Ready"
	I1108 10:32:58.864616 1208998 pod_ready.go:86] duration metric: took 399.270935ms for pod "kube-scheduler-old-k8s-version-171136" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:32:58.864630 1208998 pod_ready.go:40] duration metric: took 35.408432576s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
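
	(Editor's note: the "Ready or be gone" waits above check the PodReady condition on the labelled kube-system pods. A sketch of that check with client-go directly; the program and the single kube-dns label are illustrative, minikube uses its own pod_ready helpers.)

	// pod_ready_check.go - list kube-dns pods and report their Ready condition.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s ready=%v\n", p.Name, ready)
		}
	}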
	I1108 10:32:58.920086 1208998 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1108 10:32:58.923025 1208998 out.go:203] 
	W1108 10:32:58.925958 1208998 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 10:32:58.928726 1208998 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 10:32:58.931656 1208998 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-171136" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.776847036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.7837676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.784565582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.800565931Z" level=info msg="Created container b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d/dashboard-metrics-scraper" id=eb1adab6-b6d6-4693-a0d5-ee038fc88813 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.80346541Z" level=info msg="Starting container: b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e" id=88d70a1b-163a-4a7c-8e32-0a53f81e3af6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.805609245Z" level=info msg="Started container" PID=1659 containerID=b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d/dashboard-metrics-scraper id=88d70a1b-163a-4a7c-8e32-0a53f81e3af6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d24aefa7f5a60f89d6773e30de00babd8c99889d518a972d938676b92ca1010e
	Nov 08 10:33:00 old-k8s-version-171136 conmon[1657]: conmon b41545e6757e9358ceca <ninfo>: container 1659 exited with status 1
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.955831327Z" level=info msg="Removing container: f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236" id=9d190a39-ce6b-429f-a2ac-7e78d9035f03 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.965885932Z" level=info msg="Error loading conmon cgroup of container f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236: cgroup deleted" id=9d190a39-ce6b-429f-a2ac-7e78d9035f03 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:33:00 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:00.969443647Z" level=info msg="Removed container f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d/dashboard-metrics-scraper" id=9d190a39-ce6b-429f-a2ac-7e78d9035f03 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.73275161Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.738431065Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.738477693Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.738500093Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.741933365Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.74196913Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.741992587Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.745305076Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.745336697Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.745358145Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.749177276Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.749217422Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.749242783Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.752283083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:33:02 old-k8s-version-171136 crio[653]: time="2025-11-08T10:33:02.752314942Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	b41545e6757e9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   2                   d24aefa7f5a60       dashboard-metrics-scraper-5f989dc9cf-45n9d       kubernetes-dashboard
	758b80ab804e5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   d30c51f2c4440       storage-provisioner                              kube-system
	25fa1ac9d4bca       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   2141302349a26       kubernetes-dashboard-8694d4445c-k8zsb            kubernetes-dashboard
	2b33161a0a491       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   eab19e7913c48       coredns-5dd5756b68-5m4ph                         kube-system
	ff8ff4eb956b1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   4efd7a751e50c       busybox                                          default
	3c438edaaae97       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   b537f4843472b       kindnet-bg4r4                                    kube-system
	a1ea9a35262a2       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           54 seconds ago       Running             kube-proxy                  1                   f8a96b4184a68       kube-proxy-8ml4s                                 kube-system
	246af0d96cd99       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   d30c51f2c4440       storage-provisioner                              kube-system
	db8c533fb06e8       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   6729f05e8a503       etcd-old-k8s-version-171136                      kube-system
	6002b979fafdf       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   80631831d0484       kube-scheduler-old-k8s-version-171136            kube-system
	c0039ca9f9316       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   59aa150768ca3       kube-controller-manager-old-k8s-version-171136   kube-system
	86455d1631572       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   b8a922c15aa93       kube-apiserver-old-k8s-version-171136            kube-system
	
	
	==> coredns [2b33161a0a491a5086c7c8ae7d045c0558f8c2fc886a2ba82e34c1b419eac34b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40102 - 44691 "HINFO IN 6628585141341047097.5452165239705701379. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012847631s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-171136
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-171136
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=old-k8s-version-171136
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_31_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:31:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-171136
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:33:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:32:51 +0000   Sat, 08 Nov 2025 10:31:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:32:51 +0000   Sat, 08 Nov 2025 10:31:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:32:51 +0000   Sat, 08 Nov 2025 10:31:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:32:51 +0000   Sat, 08 Nov 2025 10:31:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-171136
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                abac0900-0998-47c3-b513-18b6d2fce4e7
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-5m4ph                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-171136                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-bg4r4                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-171136             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-171136    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-8ml4s                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-171136             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-45n9d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-k8zsb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-171136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-171136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-171136 event: Registered Node old-k8s-version-171136 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-171136 status is now: NodeReady
	  Normal  Starting                 61s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node old-k8s-version-171136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)    kubelet          Node old-k8s-version-171136 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                  node-controller  Node old-k8s-version-171136 event: Registered Node old-k8s-version-171136 in Controller
	
	
	==> dmesg <==
	[Nov 8 10:10] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:11] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[ +18.424643] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [db8c533fb06e8ef7402212f3c434623824a29c9cf817e134cf0d1695471f2609] <==
	{"level":"info","ts":"2025-11-08T10:32:16.912889Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T10:32:16.912924Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T10:32:16.913158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-08T10:32:16.913248Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-08T10:32:16.913375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:32:16.913429Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T10:32:16.916754Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-08T10:32:16.920679Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-08T10:32:16.916899Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:32:16.921158Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-08T10:32:16.921254Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-08T10:32:18.71769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-08T10:32:18.717736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-08T10:32:18.717769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-08T10:32:18.717788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-08T10:32:18.717795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-08T10:32:18.717805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-08T10:32:18.717823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-08T10:32:18.7218Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-171136 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T10:32:18.721847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:32:18.722839Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-08T10:32:18.723008Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T10:32:18.723871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-08T10:32:18.727812Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T10:32:18.727854Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:33:16 up  9:15,  0 user,  load average: 2.24, 3.14, 2.72
	Linux old-k8s-version-171136 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c438edaaae97ac5fc21d3e9f7a5bfc1abf55d6f94c1d40caf872c0f88407309] <==
	I1108 10:32:22.528158       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:32:22.612796       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:32:22.612934       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:32:22.612947       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:32:22.612961       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:32:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:32:22.733491       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:32:22.733519       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:32:22.733528       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:32:22.733630       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:32:52.733108       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:32:52.733111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:32:52.733361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:32:52.733477       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:32:54.334555       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:32:54.334587       1 metrics.go:72] Registering metrics
	I1108 10:32:54.334665       1 controller.go:711] "Syncing nftables rules"
	I1108 10:33:02.732403       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:33:02.732501       1 main.go:301] handling current node
	I1108 10:33:12.738135       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:33:12.738168       1 main.go:301] handling current node
	
	
	==> kube-apiserver [86455d1631572d37d82679402ce9bf75876840bd25c547b5d518b6af7ce1c24d] <==
	I1108 10:32:21.256021       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:32:21.284070       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:32:21.292674       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 10:32:21.292821       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 10:32:21.294034       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 10:32:21.294835       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 10:32:21.294510       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 10:32:21.294613       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 10:32:21.296943       1 aggregator.go:166] initial CRD sync complete...
	I1108 10:32:21.296989       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 10:32:21.297018       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:32:21.297046       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:32:21.339450       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1108 10:32:21.378628       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:32:21.996946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:32:23.152388       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 10:32:23.217050       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 10:32:23.258041       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:32:23.282508       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:32:23.295528       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 10:32:23.379247       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.149.186"}
	I1108 10:32:23.402818       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.215.210"}
	I1108 10:32:33.695505       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:32:33.701940       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1108 10:32:33.738167       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c0039ca9f9316f54572320f29d7cfdc22e2d6bf9c3d7f61d16d19d0dfce14965] <==
	I1108 10:32:33.765660       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-45n9d"
	I1108 10:32:33.783580       1 shared_informer.go:318] Caches are synced for cronjob
	I1108 10:32:33.800857       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 10:32:33.801130       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-k8zsb"
	I1108 10:32:33.822988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.392245ms"
	I1108 10:32:33.834774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="104.46423ms"
	I1108 10:32:33.840527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="17.417254ms"
	I1108 10:32:33.840734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.288µs"
	I1108 10:32:33.850042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.011µs"
	I1108 10:32:33.855816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.643894ms"
	I1108 10:32:33.855998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="70.381µs"
	I1108 10:32:33.874369       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 10:32:33.881591       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.642µs"
	I1108 10:32:34.213404       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:32:34.213453       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 10:32:34.227584       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 10:32:39.898872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.851µs"
	I1108 10:32:40.915970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.934µs"
	I1108 10:32:41.915988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.895µs"
	I1108 10:32:44.936854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.670144ms"
	I1108 10:32:44.937147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.604µs"
	I1108 10:32:57.467710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.508364ms"
	I1108 10:32:57.469535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.306µs"
	I1108 10:33:00.965462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.325µs"
	I1108 10:33:04.998996       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.464µs"
	
	
	==> kube-proxy [a1ea9a35262a2ecf211dbe2bd4eb8aa0b383c6dde45b73c0eb91cf2e3d64d7d1] <==
	I1108 10:32:22.537749       1 server_others.go:69] "Using iptables proxy"
	I1108 10:32:22.575734       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1108 10:32:22.614390       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:32:22.632340       1 server_others.go:152] "Using iptables Proxier"
	I1108 10:32:22.632378       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 10:32:22.632387       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 10:32:22.632414       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 10:32:22.632872       1 server.go:846] "Version info" version="v1.28.0"
	I1108 10:32:22.632886       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:32:22.634129       1 config.go:188] "Starting service config controller"
	I1108 10:32:22.634200       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 10:32:22.634243       1 config.go:97] "Starting endpoint slice config controller"
	I1108 10:32:22.634285       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 10:32:22.634713       1 config.go:315] "Starting node config controller"
	I1108 10:32:22.634766       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 10:32:22.734756       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 10:32:22.734814       1 shared_informer.go:318] Caches are synced for service config
	I1108 10:32:22.735053       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6002b979fafdf69a44654d6dde5cc544aca07f7cc8a38cab91edafb52c08cd41] <==
	I1108 10:32:19.858223       1 serving.go:348] Generated self-signed cert in-memory
	I1108 10:32:21.568088       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1108 10:32:21.568234       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:32:21.573154       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1108 10:32:21.573361       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1108 10:32:21.573405       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1108 10:32:21.573448       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 10:32:21.581071       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:32:21.586697       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 10:32:21.585959       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:32:21.587099       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 10:32:21.677515       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1108 10:32:21.688560       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 10:32:21.688636       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Nov 08 10:32:33 old-k8s-version-171136 kubelet[780]: I1108 10:32:33.908256     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-45n9d\" (UID: \"4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d"
	Nov 08 10:32:33 old-k8s-version-171136 kubelet[780]: I1108 10:32:33.908349     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hxcn\" (UniqueName: \"kubernetes.io/projected/4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a-kube-api-access-5hxcn\") pod \"dashboard-metrics-scraper-5f989dc9cf-45n9d\" (UID: \"4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d"
	Nov 08 10:32:33 old-k8s-version-171136 kubelet[780]: I1108 10:32:33.908430     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdl4c\" (UniqueName: \"kubernetes.io/projected/16871c16-e616-4ff3-8dfa-809dcd2a3b26-kube-api-access-jdl4c\") pod \"kubernetes-dashboard-8694d4445c-k8zsb\" (UID: \"16871c16-e616-4ff3-8dfa-809dcd2a3b26\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-k8zsb"
	Nov 08 10:32:35 old-k8s-version-171136 kubelet[780]: W1108 10:32:35.031098     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/crio-d24aefa7f5a60f89d6773e30de00babd8c99889d518a972d938676b92ca1010e WatchSource:0}: Error finding container d24aefa7f5a60f89d6773e30de00babd8c99889d518a972d938676b92ca1010e: Status 404 returned error can't find the container with id d24aefa7f5a60f89d6773e30de00babd8c99889d518a972d938676b92ca1010e
	Nov 08 10:32:35 old-k8s-version-171136 kubelet[780]: W1108 10:32:35.051811     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b7cf45de166dc516ebdb259b98c1833a8d36e847a043d12153ee83f0921ae63d/crio-2141302349a2634516545d12f0fcfb374c9774b6b76c6ca12487ddf553d7f7fc WatchSource:0}: Error finding container 2141302349a2634516545d12f0fcfb374c9774b6b76c6ca12487ddf553d7f7fc: Status 404 returned error can't find the container with id 2141302349a2634516545d12f0fcfb374c9774b6b76c6ca12487ddf553d7f7fc
	Nov 08 10:32:39 old-k8s-version-171136 kubelet[780]: I1108 10:32:39.884574     780 scope.go:117] "RemoveContainer" containerID="710bb8b0c7ed5282688dc29dcefa2a227372d1d60f90cde424238c462ebf6bc9"
	Nov 08 10:32:40 old-k8s-version-171136 kubelet[780]: I1108 10:32:40.892739     780 scope.go:117] "RemoveContainer" containerID="710bb8b0c7ed5282688dc29dcefa2a227372d1d60f90cde424238c462ebf6bc9"
	Nov 08 10:32:40 old-k8s-version-171136 kubelet[780]: I1108 10:32:40.893080     780 scope.go:117] "RemoveContainer" containerID="f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236"
	Nov 08 10:32:40 old-k8s-version-171136 kubelet[780]: E1108 10:32:40.893336     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-45n9d_kubernetes-dashboard(4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d" podUID="4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a"
	Nov 08 10:32:41 old-k8s-version-171136 kubelet[780]: I1108 10:32:41.896763     780 scope.go:117] "RemoveContainer" containerID="f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236"
	Nov 08 10:32:41 old-k8s-version-171136 kubelet[780]: E1108 10:32:41.897045     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-45n9d_kubernetes-dashboard(4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d" podUID="4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a"
	Nov 08 10:32:44 old-k8s-version-171136 kubelet[780]: I1108 10:32:44.984615     780 scope.go:117] "RemoveContainer" containerID="f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236"
	Nov 08 10:32:44 old-k8s-version-171136 kubelet[780]: E1108 10:32:44.985456     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-45n9d_kubernetes-dashboard(4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d" podUID="4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a"
	Nov 08 10:32:52 old-k8s-version-171136 kubelet[780]: I1108 10:32:52.924240     780 scope.go:117] "RemoveContainer" containerID="246af0d96cd99263d477cfcfde9cf5b96d4eb41bbf3703a2a45a5b4e53cc84de"
	Nov 08 10:32:52 old-k8s-version-171136 kubelet[780]: I1108 10:32:52.949108     780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-k8zsb" podStartSLOduration=10.948881728 podCreationTimestamp="2025-11-08 10:32:33 +0000 UTC" firstStartedPulling="2025-11-08 10:32:35.054085974 +0000 UTC m=+19.470411943" lastFinishedPulling="2025-11-08 10:32:44.054253866 +0000 UTC m=+28.470579843" observedRunningTime="2025-11-08 10:32:44.919496436 +0000 UTC m=+29.335822405" watchObservedRunningTime="2025-11-08 10:32:52.949049628 +0000 UTC m=+37.365375605"
	Nov 08 10:33:00 old-k8s-version-171136 kubelet[780]: I1108 10:33:00.770930     780 scope.go:117] "RemoveContainer" containerID="f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236"
	Nov 08 10:33:00 old-k8s-version-171136 kubelet[780]: I1108 10:33:00.943883     780 scope.go:117] "RemoveContainer" containerID="f0763be5454b860f13982f2e3be220c608304fc73d11369328dddc7a2ab10236"
	Nov 08 10:33:00 old-k8s-version-171136 kubelet[780]: I1108 10:33:00.944151     780 scope.go:117] "RemoveContainer" containerID="b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e"
	Nov 08 10:33:00 old-k8s-version-171136 kubelet[780]: E1108 10:33:00.944421     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-45n9d_kubernetes-dashboard(4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d" podUID="4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a"
	Nov 08 10:33:04 old-k8s-version-171136 kubelet[780]: I1108 10:33:04.984201     780 scope.go:117] "RemoveContainer" containerID="b41545e6757e9358ceca65cd9472e42d12a7a3c0badd66c137994c1b8ebe370e"
	Nov 08 10:33:04 old-k8s-version-171136 kubelet[780]: E1108 10:33:04.985008     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-45n9d_kubernetes-dashboard(4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-45n9d" podUID="4e0f14ce-437a-4a55-bfbe-d42bb7fecc3a"
	Nov 08 10:33:11 old-k8s-version-171136 kubelet[780]: I1108 10:33:11.242921     780 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 08 10:33:11 old-k8s-version-171136 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:33:11 old-k8s-version-171136 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:33:11 old-k8s-version-171136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [25fa1ac9d4bca6c7f5c615c071f8779149b53aa42686a12af40926c011b98b71] <==
	2025/11/08 10:32:44 Using namespace: kubernetes-dashboard
	2025/11/08 10:32:44 Using in-cluster config to connect to apiserver
	2025/11/08 10:32:44 Using secret token for csrf signing
	2025/11/08 10:32:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:32:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:32:44 Successful initial request to the apiserver, version: v1.28.0
	2025/11/08 10:32:44 Generating JWE encryption key
	2025/11/08 10:32:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:32:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:32:45 Initializing JWE encryption key from synchronized object
	2025/11/08 10:32:45 Creating in-cluster Sidecar client
	2025/11/08 10:32:45 Serving insecurely on HTTP port: 9090
	2025/11/08 10:32:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:33:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:32:44 Starting overwatch
	
	
	==> storage-provisioner [246af0d96cd99263d477cfcfde9cf5b96d4eb41bbf3703a2a45a5b4e53cc84de] <==
	I1108 10:32:22.475691       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:32:52.485221       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [758b80ab804e552feb3f98e52fa667d161d85d3bab2614ec5c6efe8963ea3698] <==
	I1108 10:32:52.976153       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:32:52.990681       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:32:52.990816       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 10:33:10.389781       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:33:10.390188       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a450f0e-2def-442b-8030-194bd9a30378", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-171136_7a5e9898-64f9-45c4-a103-46258ada2a91 became leader
	I1108 10:33:10.390255       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-171136_7a5e9898-64f9-45c4-a103-46258ada2a91!
	I1108 10:33:10.490897       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-171136_7a5e9898-64f9-45c4-a103-46258ada2a91!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-171136 -n old-k8s-version-171136
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-171136 -n old-k8s-version-171136: exit status 2 (371.357809ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-171136 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.82s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-236075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-236075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (284.655242ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:34:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-236075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
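
Note on the failure mode: the exit status 11 above is raised before any addon manifest is applied. Per the MK_ADDON_ENABLE_PAUSED error chain ("check paused: list paused: runc: sudo runc list -f json"), `addons enable` first verifies that the cluster is not paused, and that probe shells into the node and runs `sudo runc list -f json`, which fails on this crio profile because `/run/runc` does not exist. A minimal sketch of re-running the probe by hand, assuming the default-k8s-diff-port-236075 profile is still up (the crictl cross-check is an assumption about what the kicbase node image provides, not part of the failing check):

	# the exact command the paused check runs, per the error above
	out/minikube-linux-arm64 -p default-k8s-diff-port-236075 ssh -- sudo runc list -f json
	# confirm the state directory the error complains about
	out/minikube-linux-arm64 -p default-k8s-diff-port-236075 ssh -- sudo ls /run/runc
	# assumed cross-check: inspect container state through crio/crictl instead of runc
	out/minikube-linux-arm64 -p default-k8s-diff-port-236075 ssh -- sudo crictl ps -a
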
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-236075 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-236075 describe deploy/metrics-server -n kube-system: exit status 1 (92.421913ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-236075 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
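
The "Addon deployment info" above is empty because the describe call returned NotFound: the enable command exited before the metrics-server manifest was applied, so there is no deployment whose image could contain fake.domain/registry.k8s.io/echoserver:1.4. A minimal sketch of the equivalent by-hand check (context name taken from this run; on a passing run the first command would print the fake.domain image, here it would simply repeat the NotFound error):

	kubectl --context default-k8s-diff-port-236075 -n kube-system get deploy metrics-server \
	        -o jsonpath='{.spec.template.spec.containers[*].image}'
	# list what the addon manager did manage to create in kube-system
	kubectl --context default-k8s-diff-port-236075 -n kube-system get deploy,po
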
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-236075
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-236075:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf",
	        "Created": "2025-11-08T10:33:26.092972115Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1212994,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:33:26.154810001Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/hostname",
	        "HostsPath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/hosts",
	        "LogPath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf-json.log",
	        "Name": "/default-k8s-diff-port-236075",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-236075:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-236075",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf",
	                "LowerDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-236075",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-236075/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-236075",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-236075",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-236075",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3b9f59a92bb4e918e765bfce773a807c03827e1fd2c3c44710a2fae78e40d703",
	            "SandboxKey": "/var/run/docker/netns/3b9f59a92bb4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34517"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34518"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34521"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34519"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34520"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-236075": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:f2:f7:28:c4:30",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38f263a32d28f326bd7caf8b4f69506dbe3e875f124d60f1d6382480728769c0",
	                    "EndpointID": "dd991b37d1ec9283d24208074645fee654e329e8fb622e2a766a9c80a97a2f7d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-236075",
	                        "764db5e58d40"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
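
The inspect output above also shows how the kic container exposes the node: each published port (22, 2376, 5000, 8444, 32443) is bound to 127.0.0.1 with an ephemeral host port, so the apiserver on 8444 is reachable from the host only via 127.0.0.1:34520. A minimal sketch of extracting that mapping directly from Docker (profile name as above):

	docker port default-k8s-diff-port-236075 8444/tcp
	# equivalently, via the inspect template
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-236075
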
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-236075 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-236075 logs -n 25: (1.21396106s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-731120 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-731120                │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-731120 sudo crio config                                                                                                                                                                                                             │ cilium-731120                │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │                     │
	│ delete  │ -p cilium-731120                                                                                                                                                                                                                              │ cilium-731120                │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p force-systemd-env-680693 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-680693     │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ delete  │ -p kubernetes-upgrade-666491                                                                                                                                                                                                                  │ kubernetes-upgrade-666491    │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p force-systemd-env-680693                                                                                                                                                                                                                   │ force-systemd-env-680693     │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p cert-options-517657 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:30 UTC │
	│ ssh     │ cert-options-517657 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ ssh     │ -p cert-options-517657 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-517657                                                                                                                                                                                                                        │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-171136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-171136 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │ 08 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-171136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ image   │ old-k8s-version-171136 image list --format=json                                                                                                                                                                                               │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-171136 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-837698                                                                                                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-236075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:34:08
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:34:08.711554 1216426 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:34:08.711680 1216426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:34:08.711691 1216426 out.go:374] Setting ErrFile to fd 2...
	I1108 10:34:08.711696 1216426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:34:08.711934 1216426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:34:08.712427 1216426 out.go:368] Setting JSON to false
	I1108 10:34:08.713403 1216426 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33394,"bootTime":1762564655,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:34:08.713477 1216426 start.go:143] virtualization:  
	I1108 10:34:08.717238 1216426 out.go:179] * [embed-certs-790346] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:34:08.721886 1216426 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:34:08.721937 1216426 notify.go:221] Checking for updates...
	I1108 10:34:08.728298 1216426 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:34:08.731536 1216426 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:34:08.735174 1216426 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:34:08.738234 1216426 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:34:08.741206 1216426 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:34:08.744843 1216426 config.go:182] Loaded profile config "default-k8s-diff-port-236075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:34:08.744963 1216426 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:34:08.772004 1216426 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:34:08.772135 1216426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:34:08.835880 1216426 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:34:08.825926806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:34:08.835978 1216426 docker.go:319] overlay module found
	I1108 10:34:08.839165 1216426 out.go:179] * Using the docker driver based on user configuration
	I1108 10:34:08.842066 1216426 start.go:309] selected driver: docker
	I1108 10:34:08.842089 1216426 start.go:930] validating driver "docker" against <nil>
	I1108 10:34:08.842103 1216426 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:34:08.842870 1216426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:34:08.906886 1216426 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:34:08.892658202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:34:08.907049 1216426 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:34:08.907275 1216426 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:34:08.910267 1216426 out.go:179] * Using Docker driver with root privileges
	I1108 10:34:08.913151 1216426 cni.go:84] Creating CNI manager for ""
	I1108 10:34:08.913227 1216426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:34:08.913241 1216426 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:34:08.913369 1216426 start.go:353] cluster config:
	{Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:34:08.916505 1216426 out.go:179] * Starting "embed-certs-790346" primary control-plane node in "embed-certs-790346" cluster
	I1108 10:34:08.919335 1216426 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:34:08.922142 1216426 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:34:08.925017 1216426 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:34:08.925081 1216426 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:34:08.925094 1216426 cache.go:59] Caching tarball of preloaded images
	I1108 10:34:08.925106 1216426 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:34:08.925176 1216426 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:34:08.925187 1216426 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:34:08.925300 1216426 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/config.json ...
	I1108 10:34:08.925316 1216426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/config.json: {Name:mk0a3fb0bba461603e173b5402ce0f8db5a1addf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:34:08.944391 1216426 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:34:08.944414 1216426 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:34:08.944428 1216426 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:34:08.944484 1216426 start.go:360] acquireMachinesLock for embed-certs-790346: {Name:mka3c0f23b810acc7356b6e9fd36989eb99bdea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:34:08.944589 1216426 start.go:364] duration metric: took 85.717µs to acquireMachinesLock for "embed-certs-790346"
	I1108 10:34:08.944621 1216426 start.go:93] Provisioning new machine with config: &{Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:34:08.944690 1216426 start.go:125] createHost starting for "" (driver="docker")
	W1108 10:34:07.328392 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	W1108 10:34:09.329134 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	I1108 10:34:08.947995 1216426 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:34:08.948236 1216426 start.go:159] libmachine.API.Create for "embed-certs-790346" (driver="docker")
	I1108 10:34:08.948282 1216426 client.go:173] LocalClient.Create starting
	I1108 10:34:08.948361 1216426 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem
	I1108 10:34:08.948396 1216426 main.go:143] libmachine: Decoding PEM data...
	I1108 10:34:08.948413 1216426 main.go:143] libmachine: Parsing certificate...
	I1108 10:34:08.948497 1216426 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem
	I1108 10:34:08.948523 1216426 main.go:143] libmachine: Decoding PEM data...
	I1108 10:34:08.948537 1216426 main.go:143] libmachine: Parsing certificate...
	I1108 10:34:08.948915 1216426 cli_runner.go:164] Run: docker network inspect embed-certs-790346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:34:08.966549 1216426 cli_runner.go:211] docker network inspect embed-certs-790346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:34:08.966634 1216426 network_create.go:284] running [docker network inspect embed-certs-790346] to gather additional debugging logs...
	I1108 10:34:08.966657 1216426 cli_runner.go:164] Run: docker network inspect embed-certs-790346
	W1108 10:34:08.983007 1216426 cli_runner.go:211] docker network inspect embed-certs-790346 returned with exit code 1
	I1108 10:34:08.983042 1216426 network_create.go:287] error running [docker network inspect embed-certs-790346]: docker network inspect embed-certs-790346: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-790346 not found
	I1108 10:34:08.983055 1216426 network_create.go:289] output of [docker network inspect embed-certs-790346]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-790346 not found
	
	** /stderr **
	I1108 10:34:08.983167 1216426 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:34:08.999583 1216426 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f127b1978c3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:c7:37:65:8c:96} reservation:<nil>}
	I1108 10:34:08.999895 1216426 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b98bf73d2e94 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:99:be:46:ea:86} reservation:<nil>}
	I1108 10:34:09.000221 1216426 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c4df73992be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:ad:c1:c0:ea:6d} reservation:<nil>}
	I1108 10:34:09.000704 1216426 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a026e0}
	I1108 10:34:09.000731 1216426 network_create.go:124] attempt to create docker network embed-certs-790346 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 10:34:09.000787 1216426 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-790346 embed-certs-790346
	I1108 10:34:09.074979 1216426 network_create.go:108] docker network embed-certs-790346 192.168.76.0/24 created
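The three "skipping subnet ... that is taken" entries above show how the profile's network was chosen: each candidate 192.168.x.0/24 block that already backs a host bridge is rejected, and the first free one (192.168.76.0/24 here) is passed to docker network create. Below is a minimal sketch of that selection under stated assumptions: the "taken" set is hard-coded for illustration (the real check inspects host interfaces), and the candidates here advance by 9 in the third octet only because that is the progression visible in this run (49, 58, 67, 76).

package main

import "fmt"

// pickSubnet walks candidate /24 blocks and returns the first one not in use.
// The step of 9 mirrors the subnets seen in the log above; the taken set is
// purely illustrative - minikube derives it from the host's bridge interfaces.
func pickSubnet(taken map[string]bool) (string, bool) {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-0f127b1978c3
		"192.168.58.0/24": true, // br-b98bf73d2e94
		"192.168.67.0/24": true, // br-3c4df73992be
	}
	if cidr, ok := pickSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // 192.168.76.0/24 in this run
	}
}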
	I1108 10:34:09.075016 1216426 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-790346" container
	I1108 10:34:09.075091 1216426 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:34:09.093980 1216426 cli_runner.go:164] Run: docker volume create embed-certs-790346 --label name.minikube.sigs.k8s.io=embed-certs-790346 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:34:09.115015 1216426 oci.go:103] Successfully created a docker volume embed-certs-790346
	I1108 10:34:09.115101 1216426 cli_runner.go:164] Run: docker run --rm --name embed-certs-790346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-790346 --entrypoint /usr/bin/test -v embed-certs-790346:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:34:09.655452 1216426 oci.go:107] Successfully prepared a docker volume embed-certs-790346
	I1108 10:34:09.655517 1216426 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:34:09.655540 1216426 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:34:09.655615 1216426 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-790346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 10:34:11.829987 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	W1108 10:34:14.328875 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	I1108 10:34:14.089773 1216426 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-790346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.434104996s)
	I1108 10:34:14.089810 1216426 kic.go:203] duration metric: took 4.434265984s to extract preloaded images to volume ...
	W1108 10:34:14.089945 1216426 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:34:14.090057 1216426 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:34:14.145553 1216426 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-790346 --name embed-certs-790346 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-790346 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-790346 --network embed-certs-790346 --ip 192.168.76.2 --volume embed-certs-790346:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:34:14.454593 1216426 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Running}}
	I1108 10:34:14.482061 1216426 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:34:14.510451 1216426 cli_runner.go:164] Run: docker exec embed-certs-790346 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:34:14.571402 1216426 oci.go:144] the created container "embed-certs-790346" has a running status.
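The node container publishes its SSH, API-server and registry ports to ephemeral host ports on 127.0.0.1 (--publish=127.0.0.1::22 and friends in the docker run above), so the following steps recover the assigned port with a docker container inspect Go template (port 34522 for 22/tcp in this run, as the inspect calls further down show). A small sketch of the same lookup via os/exec; the template string is copied from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port Docker published for the container's
// 22/tcp. The inspect template is the one used in the log above.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("embed-certs-790346")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh is published on 127.0.0.1:" + port) // e.g. 34522 in this run
}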
	I1108 10:34:14.571432 1216426 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa...
	I1108 10:34:15.050270 1216426 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:34:15.071466 1216426 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:34:15.091803 1216426 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:34:15.091825 1216426 kic_runner.go:114] Args: [docker exec --privileged embed-certs-790346 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:34:15.135480 1216426 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:34:15.154553 1216426 machine.go:94] provisionDockerMachine start ...
	I1108 10:34:15.154651 1216426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:34:15.174656 1216426 main.go:143] libmachine: Using SSH client type: native
	I1108 10:34:15.175018 1216426 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34522 <nil> <nil>}
	I1108 10:34:15.175029 1216426 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:34:15.175729 1216426 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46546->127.0.0.1:34522: read: connection reset by peer
	I1108 10:34:18.329148 1216426 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790346
	
	I1108 10:34:18.329187 1216426 ubuntu.go:182] provisioning hostname "embed-certs-790346"
	I1108 10:34:18.329261 1216426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:34:18.345952 1216426 main.go:143] libmachine: Using SSH client type: native
	I1108 10:34:18.346262 1216426 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34522 <nil> <nil>}
	I1108 10:34:18.346278 1216426 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-790346 && echo "embed-certs-790346" | sudo tee /etc/hostname
	I1108 10:34:18.506456 1216426 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790346
	
	I1108 10:34:18.506567 1216426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:34:18.524233 1216426 main.go:143] libmachine: Using SSH client type: native
	I1108 10:34:18.524596 1216426 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34522 <nil> <nil>}
	I1108 10:34:18.524624 1216426 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-790346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-790346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-790346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:34:18.676633 1216426 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:34:18.676658 1216426 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:34:18.676700 1216426 ubuntu.go:190] setting up certificates
	I1108 10:34:18.676715 1216426 provision.go:84] configureAuth start
	I1108 10:34:18.676788 1216426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790346
	I1108 10:34:18.693971 1216426 provision.go:143] copyHostCerts
	I1108 10:34:18.694030 1216426 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:34:18.694039 1216426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:34:18.694167 1216426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:34:18.694296 1216426 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:34:18.694310 1216426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:34:18.694343 1216426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:34:18.694411 1216426 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:34:18.694421 1216426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:34:18.694451 1216426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:34:18.694513 1216426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.embed-certs-790346 san=[127.0.0.1 192.168.76.2 embed-certs-790346 localhost minikube]
	W1108 10:34:16.828969 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	W1108 10:34:18.830518 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
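The "generating server cert" entry above (provision.go:117) produces a server certificate signed by the profile's local CA and carrying the SANs listed (127.0.0.1, 192.168.76.2, embed-certs-790346, localhost, minikube). A compact, standard-library-only Go sketch of the same idea follows; the CA here is created in memory so the example runs on its own, whereas minikube loads ca.pem and ca-key.pem from its .minikube directory, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Illustrative in-memory CA (minikube reuses its existing minikubeCA files).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-790346"}},
		DNSNames:     []string{"embed-certs-790346", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("server cert (%d PEM bytes):\n%s", len(pemBytes), pemBytes)
}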
	I1108 10:34:19.467667 1216426 provision.go:177] copyRemoteCerts
	I1108 10:34:19.467742 1216426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:34:19.467789 1216426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:34:19.487440 1216426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34522 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:34:19.592259 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:34:19.609963 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:34:19.629414 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:34:19.646857 1216426 provision.go:87] duration metric: took 970.125641ms to configureAuth
	I1108 10:34:19.646881 1216426 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:34:19.647092 1216426 config.go:182] Loaded profile config "embed-certs-790346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:34:19.647201 1216426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:34:19.664412 1216426 main.go:143] libmachine: Using SSH client type: native
	I1108 10:34:19.664760 1216426 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34522 <nil> <nil>}
	I1108 10:34:19.664779 1216426 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:34:19.946419 1216426 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:34:19.946483 1216426 machine.go:97] duration metric: took 4.791908693s to provisionDockerMachine
	I1108 10:34:19.946508 1216426 client.go:176] duration metric: took 10.998213751s to LocalClient.Create
	I1108 10:34:19.946567 1216426 start.go:167] duration metric: took 10.998332279s to libmachine.API.Create "embed-certs-790346"
	I1108 10:34:19.946595 1216426 start.go:293] postStartSetup for "embed-certs-790346" (driver="docker")
	I1108 10:34:19.946623 1216426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:34:19.946725 1216426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:34:19.946787 1216426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:34:19.965052 1216426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34522 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:34:20.077235 1216426 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:34:20.080909 1216426 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:34:20.080993 1216426 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:34:20.081014 1216426 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:34:20.081099 1216426 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:34:20.081213 1216426 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:34:20.081323 1216426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:34:20.089578 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:34:20.109600 1216426 start.go:296] duration metric: took 162.971987ms for postStartSetup
	I1108 10:34:20.110044 1216426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790346
	I1108 10:34:20.129870 1216426 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/config.json ...
	I1108 10:34:20.130157 1216426 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:34:20.130198 1216426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:34:20.148646 1216426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34522 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:34:20.251955 1216426 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:34:20.257285 1216426 start.go:128] duration metric: took 11.31257846s to createHost
	I1108 10:34:20.257308 1216426 start.go:83] releasing machines lock for "embed-certs-790346", held for 11.312704897s
	I1108 10:34:20.257389 1216426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790346
	I1108 10:34:20.277791 1216426 ssh_runner.go:195] Run: cat /version.json
	I1108 10:34:20.277842 1216426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:34:20.277864 1216426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:34:20.277941 1216426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:34:20.299296 1216426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34522 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:34:20.308062 1216426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34522 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:34:20.501797 1216426 ssh_runner.go:195] Run: systemctl --version
	I1108 10:34:20.508432 1216426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:34:20.551141 1216426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:34:20.555963 1216426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:34:20.556099 1216426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:34:20.586222 1216426 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:34:20.586248 1216426 start.go:496] detecting cgroup driver to use...
	I1108 10:34:20.586286 1216426 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:34:20.586337 1216426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:34:20.605484 1216426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:34:20.619076 1216426 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:34:20.619172 1216426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:34:20.638202 1216426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:34:20.657008 1216426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:34:20.782932 1216426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:34:20.930182 1216426 docker.go:234] disabling docker service ...
	I1108 10:34:20.930273 1216426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:34:20.952537 1216426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:34:20.967352 1216426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:34:21.102036 1216426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:34:21.222225 1216426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:34:21.237347 1216426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:34:21.253536 1216426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:34:21.253607 1216426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:34:21.264971 1216426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:34:21.265038 1216426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:34:21.274503 1216426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:34:21.283519 1216426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:34:21.292430 1216426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:34:21.300619 1216426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:34:21.309944 1216426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:34:21.323387 1216426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:34:21.333971 1216426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:34:21.341425 1216426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:34:21.349137 1216426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:34:21.465974 1216426 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:34:21.597098 1216426 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:34:21.597178 1216426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:34:21.601046 1216426 start.go:564] Will wait 60s for crictl version
	I1108 10:34:21.601120 1216426 ssh_runner.go:195] Run: which crictl
	I1108 10:34:21.604863 1216426 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:34:21.635895 1216426 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:34:21.635993 1216426 ssh_runner.go:195] Run: crio --version
	I1108 10:34:21.668491 1216426 ssh_runner.go:195] Run: crio --version
	I1108 10:34:21.699684 1216426 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:34:21.702607 1216426 cli_runner.go:164] Run: docker network inspect embed-certs-790346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:34:21.718683 1216426 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:34:21.722588 1216426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:34:21.732612 1216426 kubeadm.go:884] updating cluster {Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:34:21.732719 1216426 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:34:21.732772 1216426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:34:21.769179 1216426 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:34:21.769207 1216426 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:34:21.769268 1216426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:34:21.794294 1216426 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:34:21.794321 1216426 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:34:21.794330 1216426 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:34:21.794425 1216426 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-790346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:34:21.794511 1216426 ssh_runner.go:195] Run: crio config
	I1108 10:34:21.854711 1216426 cni.go:84] Creating CNI manager for ""
	I1108 10:34:21.854785 1216426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:34:21.854820 1216426 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:34:21.854875 1216426 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-790346 NodeName:embed-certs-790346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:34:21.855043 1216426 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-790346"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:34:21.855134 1216426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:34:21.863224 1216426 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:34:21.863293 1216426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:34:21.871685 1216426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 10:34:21.887531 1216426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:34:21.904161 1216426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
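The kubeadm config printed a few entries above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that has just been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A small sketch of iterating such a stream and printing each document's kind, assuming the third-party gopkg.in/yaml.v3 package; the embedded sample is a trimmed stand-in for the real file, not the full contents shown above.

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

// Trimmed stand-in for the multi-document kubeadm config in the log above.
const sample = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(sample))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode error:", err)
			return
		}
		fmt.Printf("%s (%s)\n", doc["kind"], doc["apiVersion"])
	}
}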
	I1108 10:34:21.919231 1216426 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:34:21.923235 1216426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:34:21.933636 1216426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:34:22.065610 1216426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:34:22.085353 1216426 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346 for IP: 192.168.76.2
	I1108 10:34:22.085391 1216426 certs.go:195] generating shared ca certs ...
	I1108 10:34:22.085409 1216426 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:34:22.085605 1216426 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:34:22.085664 1216426 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:34:22.085686 1216426 certs.go:257] generating profile certs ...
	I1108 10:34:22.085760 1216426 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/client.key
	I1108 10:34:22.085778 1216426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/client.crt with IP's: []
	I1108 10:34:22.574969 1216426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/client.crt ...
	I1108 10:34:22.575010 1216426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/client.crt: {Name:mk36f632376b8a70193176b31e5eb7675c6d67b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:34:22.575218 1216426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/client.key ...
	I1108 10:34:22.575234 1216426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/client.key: {Name:mkd68c61448e66b8b29d7de796be19ab2705202d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:34:22.575330 1216426 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.key.f841e63b
	I1108 10:34:22.575348 1216426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.crt.f841e63b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 10:34:23.816004 1216426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.crt.f841e63b ...
	I1108 10:34:23.816036 1216426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.crt.f841e63b: {Name:mk3f10c162075d12b3652c5e0c108891e6f72ced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:34:23.816230 1216426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.key.f841e63b ...
	I1108 10:34:23.816247 1216426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.key.f841e63b: {Name:mk914f8518be2ccf965451584a77339b3ce366d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:34:23.816340 1216426 certs.go:382] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.crt.f841e63b -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.crt
	I1108 10:34:23.816417 1216426 certs.go:386] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.key.f841e63b -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.key
	I1108 10:34:23.816498 1216426 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.key
	I1108 10:34:23.816520 1216426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.crt with IP's: []
	I1108 10:34:24.032265 1216426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.crt ...
	I1108 10:34:24.032298 1216426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.crt: {Name:mkc2fb15ee8add39a2d264aa70200970239a353a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:34:24.032495 1216426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.key ...
	I1108 10:34:24.032511 1216426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.key: {Name:mk5c5208246ab827489a08770ebbec9149206fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:34:24.032712 1216426 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:34:24.032756 1216426 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:34:24.032770 1216426 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:34:24.032795 1216426 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:34:24.032823 1216426 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:34:24.032847 1216426 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:34:24.032891 1216426 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:34:24.033441 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:34:24.053964 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:34:24.072939 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:34:24.092108 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:34:24.110269 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1108 10:34:24.129247 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:34:24.147594 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:34:24.165296 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:34:24.182562 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:34:24.199968 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:34:24.217365 1216426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:34:24.236480 1216426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:34:24.249455 1216426 ssh_runner.go:195] Run: openssl version
	I1108 10:34:24.255749 1216426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:34:24.264382 1216426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:34:24.268136 1216426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:34:24.268206 1216426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:34:24.308932 1216426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:34:24.317245 1216426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:34:24.326030 1216426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:34:24.330777 1216426 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:34:24.330842 1216426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:34:24.373834 1216426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:34:24.382307 1216426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:34:24.390708 1216426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:34:24.394765 1216426 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:34:24.394868 1216426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:34:24.440645 1216426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
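
[Editorial note] The openssl/ln pairs above follow the standard OpenSSL trust-store convention: compute the certificate's subject hash, then point a symlink named <hash>.0 in /etc/ssl/certs at the PEM file so system TLS clients trust it. A minimal sketch of that pattern, shelling out to openssl the same way the ssh_runner commands above do; the helper name installCATrust and the hard-coded path are illustrative, not taken from the minikube source.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCATrust mirrors the pattern in the log above: compute the OpenSSL
// subject hash of a PEM certificate, then symlink /etc/ssl/certs/<hash>.0
// at it so system TLS clients trust the CA.
func installCATrust(certPath string) error {
	// openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs replaces any stale link left over from a previous run.
	return exec.Command("ln", "-fs", certPath, link).Run()
}

func main() {
	if err := installCATrust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}
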
	I1108 10:34:24.448782 1216426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:34:24.452199 1216426 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:34:24.452266 1216426 kubeadm.go:401] StartCluster: {Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:34:24.452348 1216426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:34:24.452415 1216426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:34:24.479635 1216426 cri.go:89] found id: ""
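
[Editorial note] The empty "found id" result above is how the tool decides the node is fresh: the crictl query returns no kube-system container IDs, so there is nothing to tear down before kubeadm init. A small illustrative sketch of that check, assuming crictl is on PATH and runnable via sudo; the function name hasKubeSystemContainers is made up for the example.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasKubeSystemContainers runs the same crictl query as the log above and
// reports whether any kube-system containers (running or exited) exist.
func hasKubeSystemContainers() (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return false, err
	}
	ids := strings.Fields(string(out)) // one container ID per line when non-empty
	return len(ids) > 0, nil
}

func main() {
	found, err := hasKubeSystemContainers()
	fmt.Println(found, err)
}
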
	I1108 10:34:24.479711 1216426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:34:24.487313 1216426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:34:24.494773 1216426 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:34:24.494854 1216426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:34:24.502428 1216426 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:34:24.502449 1216426 kubeadm.go:158] found existing configuration files:
	
	I1108 10:34:24.502504 1216426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:34:24.510168 1216426 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:34:24.510233 1216426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:34:24.517386 1216426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:34:24.524909 1216426 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:34:24.525004 1216426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:34:24.532412 1216426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:34:24.540261 1216426 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:34:24.540346 1216426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:34:24.547473 1216426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:34:24.554819 1216426 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:34:24.554883 1216426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
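
[Editorial note] The four grep-then-rm pairs above all apply one rule: keep an existing kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm writes a fresh one. A compact sketch of that loop, assuming it runs directly on the node (the log runs the equivalent shell commands over SSH).

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

const controlPlaneURL = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(path)
		// A missing file, or one that does not mention the expected API server
		// endpoint, is treated as stale and removed so kubeadm can regenerate it.
		if err != nil || !bytes.Contains(data, []byte(controlPlaneURL)) {
			_ = os.Remove(path)
			fmt.Println("removed stale config:", path)
		}
	}
}
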
	I1108 10:34:24.562320 1216426 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:34:24.601869 1216426 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:34:24.601932 1216426 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:34:24.629672 1216426 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:34:24.629753 1216426 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:34:24.629796 1216426 kubeadm.go:319] OS: Linux
	I1108 10:34:24.629850 1216426 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:34:24.629906 1216426 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:34:24.629960 1216426 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:34:24.630017 1216426 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:34:24.630072 1216426 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:34:24.630126 1216426 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:34:24.630189 1216426 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:34:24.630244 1216426 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:34:24.630296 1216426 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:34:24.700064 1216426 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:34:24.700203 1216426 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:34:24.700324 1216426 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:34:24.708953 1216426 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1108 10:34:21.329209 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	W1108 10:34:23.329592 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	I1108 10:34:24.714853 1216426 out.go:252]   - Generating certificates and keys ...
	I1108 10:34:24.714958 1216426 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:34:24.715039 1216426 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:34:25.149835 1216426 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:34:25.755931 1216426 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:34:26.877107 1216426 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:34:27.906935 1216426 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:34:28.525255 1216426 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:34:28.525529 1216426 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-790346 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1108 10:34:25.829801 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	W1108 10:34:27.830392 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	W1108 10:34:30.328775 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	I1108 10:34:29.333411 1216426 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:34:29.333735 1216426 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-790346 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:34:29.697712 1216426 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:34:30.033182 1216426 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:34:30.271749 1216426 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:34:30.271856 1216426 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:34:30.666062 1216426 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:34:32.601930 1216426 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	W1108 10:34:32.328880 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	W1108 10:34:34.831636 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	I1108 10:34:34.908077 1216426 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:34:35.282224 1216426 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:34:35.701469 1216426 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:34:35.701592 1216426 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:34:35.703473 1216426 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:34:35.706907 1216426 out.go:252]   - Booting up control plane ...
	I1108 10:34:35.707032 1216426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:34:35.707134 1216426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:34:35.707243 1216426 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:34:35.724713 1216426 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:34:35.725137 1216426 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:34:35.733904 1216426 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:34:35.734266 1216426 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:34:35.734316 1216426 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:34:35.876023 1216426 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:34:35.876173 1216426 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:34:36.876815 1216426 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001687733s
	I1108 10:34:36.879650 1216426 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:34:36.879746 1216426 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 10:34:36.879840 1216426 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:34:36.879922 1216426 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1108 10:34:37.328532 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	W1108 10:34:39.329207 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	I1108 10:34:41.262715 1216426 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.382442591s
	I1108 10:34:41.653989 1216426 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.774334154s
	I1108 10:34:43.381529 1216426 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501613927s
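
[Editorial note] The kubelet-check and control-plane-check lines above are plain HTTP(S) polls: keep requesting each component's healthz/livez endpoint until it answers 200 or the 4m0s deadline expires. A minimal sketch of that loop; the endpoint and timeout are hard-coded for the example, and TLS verification is skipped here only because the apiserver serves a self-signed certificate at this stage (the real tooling trusts the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline expires.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustrative only: skip certificate verification; production code
		// would trust the cluster CA bundle instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.76.2:8443/livez", 4*time.Minute))
}
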
	I1108 10:34:43.402592 1216426 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:34:43.421065 1216426 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:34:43.439141 1216426 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:34:43.439391 1216426 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-790346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:34:43.456231 1216426 kubeadm.go:319] [bootstrap-token] Using token: hxzqa8.3786fhny2r3bngvn
	I1108 10:34:43.459381 1216426 out.go:252]   - Configuring RBAC rules ...
	I1108 10:34:43.459594 1216426 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:34:43.468425 1216426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:34:43.477681 1216426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:34:43.482081 1216426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:34:43.486408 1216426 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:34:43.492896 1216426 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:34:43.788629 1216426 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:34:44.262479 1216426 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:34:44.788568 1216426 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:34:44.789692 1216426 kubeadm.go:319] 
	I1108 10:34:44.789786 1216426 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:34:44.789797 1216426 kubeadm.go:319] 
	I1108 10:34:44.789879 1216426 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:34:44.789889 1216426 kubeadm.go:319] 
	I1108 10:34:44.789916 1216426 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:34:44.789982 1216426 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:34:44.790040 1216426 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:34:44.790047 1216426 kubeadm.go:319] 
	I1108 10:34:44.790132 1216426 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:34:44.790146 1216426 kubeadm.go:319] 
	I1108 10:34:44.790206 1216426 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:34:44.790221 1216426 kubeadm.go:319] 
	I1108 10:34:44.790278 1216426 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:34:44.790377 1216426 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:34:44.790465 1216426 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:34:44.790473 1216426 kubeadm.go:319] 
	I1108 10:34:44.790586 1216426 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:34:44.790677 1216426 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:34:44.790689 1216426 kubeadm.go:319] 
	I1108 10:34:44.790780 1216426 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hxzqa8.3786fhny2r3bngvn \
	I1108 10:34:44.790895 1216426 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 \
	I1108 10:34:44.790921 1216426 kubeadm.go:319] 	--control-plane 
	I1108 10:34:44.790931 1216426 kubeadm.go:319] 
	I1108 10:34:44.791021 1216426 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:34:44.791030 1216426 kubeadm.go:319] 
	I1108 10:34:44.791116 1216426 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hxzqa8.3786fhny2r3bngvn \
	I1108 10:34:44.791235 1216426 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 
	I1108 10:34:44.795727 1216426 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:34:44.795996 1216426 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:34:44.796119 1216426 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 10:34:44.796153 1216426 cni.go:84] Creating CNI manager for ""
	I1108 10:34:44.796167 1216426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:34:44.799316 1216426 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1108 10:34:41.828147 1212611 node_ready.go:57] node "default-k8s-diff-port-236075" has "Ready":"False" status (will retry)
	I1108 10:34:42.843227 1212611 node_ready.go:49] node "default-k8s-diff-port-236075" is "Ready"
	I1108 10:34:42.843260 1212611 node_ready.go:38] duration metric: took 39.517994495s for node "default-k8s-diff-port-236075" to be "Ready" ...
	I1108 10:34:42.843274 1212611 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:34:42.843338 1212611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:34:42.869766 1212611 api_server.go:72] duration metric: took 41.795093774s to wait for apiserver process to appear ...
	I1108 10:34:42.869793 1212611 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:34:42.869812 1212611 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1108 10:34:42.879875 1212611 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1108 10:34:42.880987 1212611 api_server.go:141] control plane version: v1.34.1
	I1108 10:34:42.881015 1212611 api_server.go:131] duration metric: took 11.215142ms to wait for apiserver health ...
	I1108 10:34:42.881025 1212611 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:34:42.884326 1212611 system_pods.go:59] 8 kube-system pods found
	I1108 10:34:42.884362 1212611 system_pods.go:61] "coredns-66bc5c9577-x99cj" [0a37e11d-012b-43a6-bdfb-eed3dee25c16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:34:42.884369 1212611 system_pods.go:61] "etcd-default-k8s-diff-port-236075" [48a515c0-6a89-4cf7-b22c-c3cdaafc02fa] Running
	I1108 10:34:42.884374 1212611 system_pods.go:61] "kindnet-7jcpv" [1bdac5f1-b816-4d00-96e9-334a4c83aaf5] Running
	I1108 10:34:42.884379 1212611 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-236075" [fb4ba6c7-7d01-4104-821b-34e65780d496] Running
	I1108 10:34:42.884384 1212611 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-236075" [c6cfe97d-ff1c-4910-958c-73e97e1f9944] Running
	I1108 10:34:42.884389 1212611 system_pods.go:61] "kube-proxy-rtchk" [3f2268ef-7cb4-455a-a158-38a4a9fed026] Running
	I1108 10:34:42.884394 1212611 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-236075" [12c9dd77-62d2-48a8-8296-bdfec4ca2b99] Running
	I1108 10:34:42.884400 1212611 system_pods.go:61] "storage-provisioner" [cda5c093-f604-49d1-90ad-770da6575a3e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:34:42.884412 1212611 system_pods.go:74] duration metric: took 3.379852ms to wait for pod list to return data ...
	I1108 10:34:42.884434 1212611 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:34:42.886919 1212611 default_sa.go:45] found service account: "default"
	I1108 10:34:42.886945 1212611 default_sa.go:55] duration metric: took 2.436414ms for default service account to be created ...
	I1108 10:34:42.886954 1212611 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:34:42.890133 1212611 system_pods.go:86] 8 kube-system pods found
	I1108 10:34:42.890169 1212611 system_pods.go:89] "coredns-66bc5c9577-x99cj" [0a37e11d-012b-43a6-bdfb-eed3dee25c16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:34:42.890177 1212611 system_pods.go:89] "etcd-default-k8s-diff-port-236075" [48a515c0-6a89-4cf7-b22c-c3cdaafc02fa] Running
	I1108 10:34:42.890184 1212611 system_pods.go:89] "kindnet-7jcpv" [1bdac5f1-b816-4d00-96e9-334a4c83aaf5] Running
	I1108 10:34:42.890194 1212611 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-236075" [fb4ba6c7-7d01-4104-821b-34e65780d496] Running
	I1108 10:34:42.890199 1212611 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-236075" [c6cfe97d-ff1c-4910-958c-73e97e1f9944] Running
	I1108 10:34:42.890203 1212611 system_pods.go:89] "kube-proxy-rtchk" [3f2268ef-7cb4-455a-a158-38a4a9fed026] Running
	I1108 10:34:42.890214 1212611 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-236075" [12c9dd77-62d2-48a8-8296-bdfec4ca2b99] Running
	I1108 10:34:42.890224 1212611 system_pods.go:89] "storage-provisioner" [cda5c093-f604-49d1-90ad-770da6575a3e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:34:42.890252 1212611 retry.go:31] will retry after 280.927211ms: missing components: kube-dns
	I1108 10:34:43.175845 1212611 system_pods.go:86] 8 kube-system pods found
	I1108 10:34:43.175932 1212611 system_pods.go:89] "coredns-66bc5c9577-x99cj" [0a37e11d-012b-43a6-bdfb-eed3dee25c16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:34:43.175961 1212611 system_pods.go:89] "etcd-default-k8s-diff-port-236075" [48a515c0-6a89-4cf7-b22c-c3cdaafc02fa] Running
	I1108 10:34:43.176007 1212611 system_pods.go:89] "kindnet-7jcpv" [1bdac5f1-b816-4d00-96e9-334a4c83aaf5] Running
	I1108 10:34:43.176035 1212611 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-236075" [fb4ba6c7-7d01-4104-821b-34e65780d496] Running
	I1108 10:34:43.176060 1212611 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-236075" [c6cfe97d-ff1c-4910-958c-73e97e1f9944] Running
	I1108 10:34:43.176087 1212611 system_pods.go:89] "kube-proxy-rtchk" [3f2268ef-7cb4-455a-a158-38a4a9fed026] Running
	I1108 10:34:43.176121 1212611 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-236075" [12c9dd77-62d2-48a8-8296-bdfec4ca2b99] Running
	I1108 10:34:43.176153 1212611 system_pods.go:89] "storage-provisioner" [cda5c093-f604-49d1-90ad-770da6575a3e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:34:43.176190 1212611 retry.go:31] will retry after 319.079056ms: missing components: kube-dns
	I1108 10:34:43.499419 1212611 system_pods.go:86] 8 kube-system pods found
	I1108 10:34:43.499455 1212611 system_pods.go:89] "coredns-66bc5c9577-x99cj" [0a37e11d-012b-43a6-bdfb-eed3dee25c16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:34:43.499463 1212611 system_pods.go:89] "etcd-default-k8s-diff-port-236075" [48a515c0-6a89-4cf7-b22c-c3cdaafc02fa] Running
	I1108 10:34:43.499471 1212611 system_pods.go:89] "kindnet-7jcpv" [1bdac5f1-b816-4d00-96e9-334a4c83aaf5] Running
	I1108 10:34:43.499476 1212611 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-236075" [fb4ba6c7-7d01-4104-821b-34e65780d496] Running
	I1108 10:34:43.499480 1212611 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-236075" [c6cfe97d-ff1c-4910-958c-73e97e1f9944] Running
	I1108 10:34:43.499484 1212611 system_pods.go:89] "kube-proxy-rtchk" [3f2268ef-7cb4-455a-a158-38a4a9fed026] Running
	I1108 10:34:43.499489 1212611 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-236075" [12c9dd77-62d2-48a8-8296-bdfec4ca2b99] Running
	I1108 10:34:43.499494 1212611 system_pods.go:89] "storage-provisioner" [cda5c093-f604-49d1-90ad-770da6575a3e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:34:43.499510 1212611 retry.go:31] will retry after 474.255874ms: missing components: kube-dns
	I1108 10:34:43.977974 1212611 system_pods.go:86] 8 kube-system pods found
	I1108 10:34:43.978005 1212611 system_pods.go:89] "coredns-66bc5c9577-x99cj" [0a37e11d-012b-43a6-bdfb-eed3dee25c16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:34:43.978013 1212611 system_pods.go:89] "etcd-default-k8s-diff-port-236075" [48a515c0-6a89-4cf7-b22c-c3cdaafc02fa] Running
	I1108 10:34:43.978019 1212611 system_pods.go:89] "kindnet-7jcpv" [1bdac5f1-b816-4d00-96e9-334a4c83aaf5] Running
	I1108 10:34:43.978023 1212611 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-236075" [fb4ba6c7-7d01-4104-821b-34e65780d496] Running
	I1108 10:34:43.978028 1212611 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-236075" [c6cfe97d-ff1c-4910-958c-73e97e1f9944] Running
	I1108 10:34:43.978033 1212611 system_pods.go:89] "kube-proxy-rtchk" [3f2268ef-7cb4-455a-a158-38a4a9fed026] Running
	I1108 10:34:43.978038 1212611 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-236075" [12c9dd77-62d2-48a8-8296-bdfec4ca2b99] Running
	I1108 10:34:43.978043 1212611 system_pods.go:89] "storage-provisioner" [cda5c093-f604-49d1-90ad-770da6575a3e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:34:43.978063 1212611 retry.go:31] will retry after 570.638606ms: missing components: kube-dns
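
[Editorial note] The repeated "will retry after …: missing components: kube-dns" lines are a backoff loop: list the kube-system pods, and if a required component is still Pending, sleep a slightly longer interval and try again. A rough sketch of that shape; the jittered, growing delays in the log suggest randomized backoff, but the multiplier and cap below are illustrative, not the values the tool actually uses.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents retries check with a growing, jittered delay until it
// succeeds or the overall timeout is exceeded.
func waitForComponents(check func() error, timeout time.Duration) error {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Add jitter and grow the delay, capped at a few seconds.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
}

func main() {
	attempts := 0
	err := waitForComponents(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}, time.Minute)
	fmt.Println(err)
}
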
	I1108 10:34:44.553673 1212611 system_pods.go:86] 8 kube-system pods found
	I1108 10:34:44.553705 1212611 system_pods.go:89] "coredns-66bc5c9577-x99cj" [0a37e11d-012b-43a6-bdfb-eed3dee25c16] Running
	I1108 10:34:44.553714 1212611 system_pods.go:89] "etcd-default-k8s-diff-port-236075" [48a515c0-6a89-4cf7-b22c-c3cdaafc02fa] Running
	I1108 10:34:44.553720 1212611 system_pods.go:89] "kindnet-7jcpv" [1bdac5f1-b816-4d00-96e9-334a4c83aaf5] Running
	I1108 10:34:44.553725 1212611 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-236075" [fb4ba6c7-7d01-4104-821b-34e65780d496] Running
	I1108 10:34:44.553729 1212611 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-236075" [c6cfe97d-ff1c-4910-958c-73e97e1f9944] Running
	I1108 10:34:44.553733 1212611 system_pods.go:89] "kube-proxy-rtchk" [3f2268ef-7cb4-455a-a158-38a4a9fed026] Running
	I1108 10:34:44.553737 1212611 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-236075" [12c9dd77-62d2-48a8-8296-bdfec4ca2b99] Running
	I1108 10:34:44.553743 1212611 system_pods.go:89] "storage-provisioner" [cda5c093-f604-49d1-90ad-770da6575a3e] Running
	I1108 10:34:44.553751 1212611 system_pods.go:126] duration metric: took 1.666789933s to wait for k8s-apps to be running ...
	I1108 10:34:44.553789 1212611 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:34:44.553861 1212611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:34:44.567998 1212611 system_svc.go:56] duration metric: took 14.200265ms WaitForService to wait for kubelet
	I1108 10:34:44.568027 1212611 kubeadm.go:587] duration metric: took 43.493360837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:34:44.568047 1212611 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:34:44.571367 1212611 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:34:44.571443 1212611 node_conditions.go:123] node cpu capacity is 2
	I1108 10:34:44.571471 1212611 node_conditions.go:105] duration metric: took 3.417791ms to run NodePressure ...
	I1108 10:34:44.571512 1212611 start.go:242] waiting for startup goroutines ...
	I1108 10:34:44.571538 1212611 start.go:247] waiting for cluster config update ...
	I1108 10:34:44.571568 1212611 start.go:256] writing updated cluster config ...
	I1108 10:34:44.571962 1212611 ssh_runner.go:195] Run: rm -f paused
	I1108 10:34:44.575520 1212611 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:34:44.581511 1212611 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x99cj" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:44.589540 1212611 pod_ready.go:94] pod "coredns-66bc5c9577-x99cj" is "Ready"
	I1108 10:34:44.589570 1212611 pod_ready.go:86] duration metric: took 8.030401ms for pod "coredns-66bc5c9577-x99cj" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:44.592100 1212611 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:44.597142 1212611 pod_ready.go:94] pod "etcd-default-k8s-diff-port-236075" is "Ready"
	I1108 10:34:44.597180 1212611 pod_ready.go:86] duration metric: took 5.055501ms for pod "etcd-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:44.599690 1212611 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:44.604597 1212611 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-236075" is "Ready"
	I1108 10:34:44.604624 1212611 pod_ready.go:86] duration metric: took 4.907551ms for pod "kube-apiserver-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:44.606954 1212611 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:44.979709 1212611 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-236075" is "Ready"
	I1108 10:34:44.979739 1212611 pod_ready.go:86] duration metric: took 372.761387ms for pod "kube-controller-manager-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:45.211633 1212611 pod_ready.go:83] waiting for pod "kube-proxy-rtchk" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:45.579484 1212611 pod_ready.go:94] pod "kube-proxy-rtchk" is "Ready"
	I1108 10:34:45.579576 1212611 pod_ready.go:86] duration metric: took 367.899054ms for pod "kube-proxy-rtchk" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:45.780280 1212611 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:46.179048 1212611 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-236075" is "Ready"
	I1108 10:34:46.179078 1212611 pod_ready.go:86] duration metric: took 398.771165ms for pod "kube-scheduler-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:34:46.179092 1212611 pod_ready.go:40] duration metric: took 1.603494183s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:34:46.242834 1212611 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:34:46.245966 1212611 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-236075" cluster and "default" namespace by default
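
[Editorial note] The "minor skew: 1" message above compares the client's and the cluster's minor versions; kubectl supports one minor version of skew in either direction, so 1.33 against 1.34 only warrants an informational line rather than a failure. A tiny sketch of that comparison, with version parsing simplified to "major.minor.patch" strings and error handling omitted.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of
// two "major.minor.patch" version strings, e.g. "1.33.2" vs "1.34.1" -> 1.
func minorSkew(client, cluster string) int {
	minor := func(v string) int {
		parts := strings.Split(v, ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(client) - minor(cluster)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.33.2", "1.34.1")) // 1 -> within the supported skew
}
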
	I1108 10:34:44.802315 1216426 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:34:44.806249 1216426 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 10:34:44.806268 1216426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:34:44.836001 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:34:45.305016 1216426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:34:45.305185 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:34:45.305272 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-790346 minikube.k8s.io/updated_at=2025_11_08T10_34_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=embed-certs-790346 minikube.k8s.io/primary=true
	I1108 10:34:45.342393 1216426 ops.go:34] apiserver oom_adj: -16
	I1108 10:34:45.569885 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:34:46.069943 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:34:46.570705 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:34:47.070536 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:34:47.570504 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:34:48.070553 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:34:48.570631 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:34:49.070034 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:34:49.570150 1216426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:34:49.704489 1216426 kubeadm.go:1114] duration metric: took 4.399358124s to wait for elevateKubeSystemPrivileges
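
[Editorial note] The run of "kubectl get sa default" calls above is a readiness gate: the "default" service account only appears once the controller-manager's service-account controller is running, so the tool polls for it roughly twice a second before continuing. A small sketch of the same gate using kubectl through os/exec; the half-second interval is simply what the timestamps above suggest, and doing this with client-go would look different.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until the service account
// exists (exit status 0) or the timeout elapses.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute))
}
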
	I1108 10:34:49.704517 1216426 kubeadm.go:403] duration metric: took 25.252272013s to StartCluster
	I1108 10:34:49.704536 1216426 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:34:49.704600 1216426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:34:49.706667 1216426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:34:49.707036 1216426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:34:49.707283 1216426 config.go:182] Loaded profile config "embed-certs-790346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:34:49.707349 1216426 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:34:49.707401 1216426 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:34:49.707556 1216426 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-790346"
	I1108 10:34:49.707572 1216426 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-790346"
	I1108 10:34:49.707597 1216426 host.go:66] Checking if "embed-certs-790346" exists ...
	I1108 10:34:49.708101 1216426 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:34:49.708591 1216426 addons.go:70] Setting default-storageclass=true in profile "embed-certs-790346"
	I1108 10:34:49.708618 1216426 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-790346"
	I1108 10:34:49.708914 1216426 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:34:49.713481 1216426 out.go:179] * Verifying Kubernetes components...
	I1108 10:34:49.718706 1216426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:34:49.741986 1216426 addons.go:239] Setting addon default-storageclass=true in "embed-certs-790346"
	I1108 10:34:49.742027 1216426 host.go:66] Checking if "embed-certs-790346" exists ...
	I1108 10:34:49.742472 1216426 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:34:49.753106 1216426 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:34:49.755977 1216426 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:34:49.756002 1216426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:34:49.756068 1216426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:34:49.785036 1216426 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:34:49.785057 1216426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:34:49.785118 1216426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:34:49.797620 1216426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34522 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:34:49.820000 1216426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34522 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:34:50.121967 1216426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:34:50.143791 1216426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:34:50.143905 1216426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:34:50.251107 1216426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:34:50.956118 1216426 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
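
[Editorial note] The sed pipeline at 10:34:50.143 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway from inside pods, and also enables the log plugin. Reconstructed from that sed expression (indentation approximate), the resulting Corefile carries a hosts stanza of this shape just before the forward directive:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
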
	I1108 10:34:50.958119 1216426 node_ready.go:35] waiting up to 6m0s for node "embed-certs-790346" to be "Ready" ...
	I1108 10:34:50.996548 1216426 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 10:34:50.999577 1216426 addons.go:515] duration metric: took 1.292154784s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 10:34:51.460288 1216426 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-790346" context rescaled to 1 replicas
	W1108 10:34:52.961004 1216426 node_ready.go:57] node "embed-certs-790346" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 08 10:34:43 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:43.260276152Z" level=info msg="Created container a6760cbbf3241f9ab4cccc147d2e04e6c6cab971ac4f29138dab7e515f2b3da6: kube-system/coredns-66bc5c9577-x99cj/coredns" id=1184bd78-5210-4f70-92da-fa9cd9fe9f18 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:34:43 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:43.261458852Z" level=info msg="Starting container: a6760cbbf3241f9ab4cccc147d2e04e6c6cab971ac4f29138dab7e515f2b3da6" id=e437ecbe-9838-47a3-882d-0f116501d47c name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:34:43 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:43.270158419Z" level=info msg="Started container" PID=1746 containerID=a6760cbbf3241f9ab4cccc147d2e04e6c6cab971ac4f29138dab7e515f2b3da6 description=kube-system/coredns-66bc5c9577-x99cj/coredns id=e437ecbe-9838-47a3-882d-0f116501d47c name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ba11c1787b981cfa8275efb91db44f40609e2053adf3e94a9ee86c286fbbc84
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.786236176Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c9d3ceda-743b-4b65-9c8b-47c68696f936 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.78631331Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.792147619Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:78ace024939eac0753947c0d3ec6d7833863b1b9a359f628e522ac542899d4c7 UID:f71e109f-b88f-4781-b4ac-aaabd22ff178 NetNS:/var/run/netns/5df9485f-67e6-489d-8855-42c97756ccd6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078dd0}] Aliases:map[]}"
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.792317189Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.803094248Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:78ace024939eac0753947c0d3ec6d7833863b1b9a359f628e522ac542899d4c7 UID:f71e109f-b88f-4781-b4ac-aaabd22ff178 NetNS:/var/run/netns/5df9485f-67e6-489d-8855-42c97756ccd6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078dd0}] Aliases:map[]}"
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.803249057Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.806311299Z" level=info msg="Ran pod sandbox 78ace024939eac0753947c0d3ec6d7833863b1b9a359f628e522ac542899d4c7 with infra container: default/busybox/POD" id=c9d3ceda-743b-4b65-9c8b-47c68696f936 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.809948948Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=86c23f1b-f07c-489b-8e30-9154ffc1e10e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.810078601Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=86c23f1b-f07c-489b-8e30-9154ffc1e10e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.810118338Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=86c23f1b-f07c-489b-8e30-9154ffc1e10e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.811471962Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9b74fc15-f071-48d5-947f-f01a7138a136 name=/runtime.v1.ImageService/PullImage
	Nov 08 10:34:46 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:46.813251797Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 10:34:48 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:48.797192458Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9b74fc15-f071-48d5-947f-f01a7138a136 name=/runtime.v1.ImageService/PullImage
	Nov 08 10:34:48 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:48.797849103Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cdae2697-a064-461f-8b19-7186466a5598 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:34:48 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:48.80113569Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=93bf4999-0218-4b40-a44d-4912754682da name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:34:48 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:48.807796279Z" level=info msg="Creating container: default/busybox/busybox" id=3f25cecb-4082-4e75-935f-88215134d02c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:34:48 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:48.807922585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:34:48 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:48.812745084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:34:48 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:48.813220229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:34:48 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:48.832937906Z" level=info msg="Created container 68b4d627bbae26f00ac24bfea2e7d3a074e0118c9f5b338d8288d78b9a53233e: default/busybox/busybox" id=3f25cecb-4082-4e75-935f-88215134d02c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:34:48 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:48.838278363Z" level=info msg="Starting container: 68b4d627bbae26f00ac24bfea2e7d3a074e0118c9f5b338d8288d78b9a53233e" id=12480abc-3a11-4f14-bc5b-dd94ca08f90e name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:34:48 default-k8s-diff-port-236075 crio[838]: time="2025-11-08T10:34:48.840202095Z" level=info msg="Started container" PID=1801 containerID=68b4d627bbae26f00ac24bfea2e7d3a074e0118c9f5b338d8288d78b9a53233e description=default/busybox/busybox id=12480abc-3a11-4f14-bc5b-dd94ca08f90e name=/runtime.v1.RuntimeService/StartContainer sandboxID=78ace024939eac0753947c0d3ec6d7833863b1b9a359f628e522ac542899d4c7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	68b4d627bbae2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   78ace024939ea       busybox                                                default
	a6760cbbf3241       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   5ba11c1787b98       coredns-66bc5c9577-x99cj                               kube-system
	4653d4703b092       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   739e23df77987       storage-provisioner                                    kube-system
	211d6721841bb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   04ccdc7198c2c       kube-proxy-rtchk                                       kube-system
	c34a947f50171       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   2856ec1c27c35       kindnet-7jcpv                                          kube-system
	e174af2079f16       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   224690032f3f1       kube-controller-manager-default-k8s-diff-port-236075   kube-system
	bc964ac5d0fef       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   eb3150c53f9e7       kube-scheduler-default-k8s-diff-port-236075            kube-system
	016230bfd3a1f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   ef7e34e470c9e       etcd-default-k8s-diff-port-236075                      kube-system
	ef57459d3e109       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   4aed72b3ce55c       kube-apiserver-default-k8s-diff-port-236075            kube-system
	
	
	==> coredns [a6760cbbf3241f9ab4cccc147d2e04e6c6cab971ac4f29138dab7e515f2b3da6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47292 - 20232 "HINFO IN 3464757636287932972.2144570634894772375. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01727849s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-236075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-236075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=default-k8s-diff-port-236075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_33_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:33:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-236075
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:34:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:34:42 +0000   Sat, 08 Nov 2025 10:33:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:34:42 +0000   Sat, 08 Nov 2025 10:33:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:34:42 +0000   Sat, 08 Nov 2025 10:33:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:34:42 +0000   Sat, 08 Nov 2025 10:34:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-236075
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                70b29cae-e7bf-4dbe-8a30-22731e1a459a
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-x99cj                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-236075                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-7jcpv                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-236075             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-236075    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-rtchk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-236075             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 53s   kube-proxy       
	  Normal   Starting                 61s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s   kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s   kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s   kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s   node-controller  Node default-k8s-diff-port-236075 event: Registered Node default-k8s-diff-port-236075 in Controller
	  Normal   NodeReady                14s   kubelet          Node default-k8s-diff-port-236075 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 8 10:12] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[ +18.424643] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [016230bfd3a1f3a910e6a33c5cc9623c89324c7e09da024722caba0c78a22124] <==
	{"level":"warn","ts":"2025-11-08T10:33:50.024785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.042047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.084273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.089362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.107233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.125835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.138639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.160941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.179984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.192636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.208064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.224638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.245631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.264424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.276742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.292993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.309585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.327995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.353529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.372945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.392893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.421131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.441150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.462368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:33:50.555045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51858","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:34:56 up  9:17,  0 user,  load average: 4.56, 3.82, 3.01
	Linux default-k8s-diff-port-236075 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c34a947f5017163f58905b75541f8bc0701c50d812c27b8b76206282398060f8] <==
	I1108 10:34:01.976228       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:34:02.012718       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:34:02.012867       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:34:02.012879       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:34:02.012896       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:34:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:34:02.251933       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:34:02.251952       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:34:02.251961       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:34:02.252270       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:34:32.251750       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:34:32.251932       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:34:32.253256       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 10:34:32.253373       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1108 10:34:33.553094       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:34:33.553196       1 metrics.go:72] Registering metrics
	I1108 10:34:33.553299       1 controller.go:711] "Syncing nftables rules"
	I1108 10:34:42.260657       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:34:42.260792       1 main.go:301] handling current node
	I1108 10:34:52.252850       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:34:52.252964       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ef57459d3e1093da8978a0ba99421c8f16607bb3be99e65a34f995c57159cb97] <==
	I1108 10:33:51.753104       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1108 10:33:51.753182       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1108 10:33:51.753231       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 10:33:51.755442       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:33:51.771185       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:33:51.771256       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:33:51.966856       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:33:52.265515       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 10:33:52.275252       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 10:33:52.275277       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:33:53.378471       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:33:53.440084       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:33:53.581919       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 10:33:53.592328       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1108 10:33:53.593808       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:33:53.599855       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:33:54.578656       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:33:54.800483       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:33:54.849605       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 10:33:54.876970       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 10:34:00.297914       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:34:00.364383       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:34:00.543497       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:34:00.631234       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1108 10:34:54.605759       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:60250: use of closed network connection
	
	
	==> kube-controller-manager [e174af2079f168e6899036aef98010c8b92622bff6451dcf4b494b50e7888b0b] <==
	I1108 10:33:59.599077       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 10:33:59.599152       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:33:59.599179       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 10:33:59.599201       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 10:33:59.599306       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 10:33:59.599486       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:33:59.605381       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:33:59.606079       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:33:59.606273       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 10:33:59.606330       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 10:33:59.606359       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 10:33:59.606369       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 10:33:59.606374       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 10:33:59.610334       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:33:59.613612       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:33:59.616160       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:33:59.626779       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:33:59.627398       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 10:33:59.633998       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-236075" podCIDRs=["10.244.0.0/24"]
	I1108 10:33:59.642131       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:33:59.664286       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:33:59.709281       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:33:59.709313       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:33:59.709332       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:34:44.586401       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [211d6721841bbcf63f54dcf559d404bf269116aa94f4ed453f2e6e6d4ca17564] <==
	I1108 10:34:01.951792       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:34:02.196728       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:34:02.398969       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:34:02.399045       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:34:02.399137       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:34:02.487865       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:34:02.487915       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:34:02.500323       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:34:02.500678       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:34:02.500700       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:34:02.502132       1 config.go:200] "Starting service config controller"
	I1108 10:34:02.502145       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:34:02.511418       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:34:02.513467       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:34:02.513510       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:34:02.513523       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:34:02.514173       1 config.go:309] "Starting node config controller"
	I1108 10:34:02.514181       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:34:02.514187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:34:02.602551       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:34:02.614209       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:34:02.614244       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bc964ac5d0fefca2f8ed768b361a2d78d53efbc419ee8a09c6094bbbec561cb0] <==
	I1108 10:33:52.865109       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:33:52.872075       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:33:52.872458       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:33:52.872520       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:33:52.875675       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1108 10:33:52.879843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:33:52.885534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:33:52.885676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:33:52.885750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:33:52.885827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:33:52.896013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:33:52.896155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:33:52.896258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:33:52.896360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:33:52.896476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:33:52.896563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:33:52.896656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:33:52.896750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:33:52.896857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:33:52.896950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:33:52.897063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:33:52.897179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:33:52.897313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 10:33:52.897443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1108 10:33:54.379770       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:33:56 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:33:56.773032    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-236075" podStartSLOduration=0.773014664 podStartE2EDuration="773.014664ms" podCreationTimestamp="2025-11-08 10:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:33:56.735889018 +0000 UTC m=+1.988385678" watchObservedRunningTime="2025-11-08 10:33:56.773014664 +0000 UTC m=+2.025511332"
	Nov 08 10:33:56 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:33:56.821348    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-236075" podStartSLOduration=0.821328956 podStartE2EDuration="821.328956ms" podCreationTimestamp="2025-11-08 10:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:33:56.77668144 +0000 UTC m=+2.029178108" watchObservedRunningTime="2025-11-08 10:33:56.821328956 +0000 UTC m=+2.073825632"
	Nov 08 10:33:56 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:33:56.870180    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-236075" podStartSLOduration=0.870150653 podStartE2EDuration="870.150653ms" podCreationTimestamp="2025-11-08 10:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:33:56.821949614 +0000 UTC m=+2.074446273" watchObservedRunningTime="2025-11-08 10:33:56.870150653 +0000 UTC m=+2.122647329"
	Nov 08 10:33:59 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:33:59.680907    1320 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 10:33:59 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:33:59.681667    1320 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 10:34:00 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:00.856625    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1bdac5f1-b816-4d00-96e9-334a4c83aaf5-cni-cfg\") pod \"kindnet-7jcpv\" (UID: \"1bdac5f1-b816-4d00-96e9-334a4c83aaf5\") " pod="kube-system/kindnet-7jcpv"
	Nov 08 10:34:00 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:00.857213    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bdac5f1-b816-4d00-96e9-334a4c83aaf5-xtables-lock\") pod \"kindnet-7jcpv\" (UID: \"1bdac5f1-b816-4d00-96e9-334a4c83aaf5\") " pod="kube-system/kindnet-7jcpv"
	Nov 08 10:34:00 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:00.857341    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f2268ef-7cb4-455a-a158-38a4a9fed026-kube-proxy\") pod \"kube-proxy-rtchk\" (UID: \"3f2268ef-7cb4-455a-a158-38a4a9fed026\") " pod="kube-system/kube-proxy-rtchk"
	Nov 08 10:34:00 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:00.857462    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6dt4\" (UniqueName: \"kubernetes.io/projected/1bdac5f1-b816-4d00-96e9-334a4c83aaf5-kube-api-access-b6dt4\") pod \"kindnet-7jcpv\" (UID: \"1bdac5f1-b816-4d00-96e9-334a4c83aaf5\") " pod="kube-system/kindnet-7jcpv"
	Nov 08 10:34:00 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:00.857569    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbg8m\" (UniqueName: \"kubernetes.io/projected/3f2268ef-7cb4-455a-a158-38a4a9fed026-kube-api-access-nbg8m\") pod \"kube-proxy-rtchk\" (UID: \"3f2268ef-7cb4-455a-a158-38a4a9fed026\") " pod="kube-system/kube-proxy-rtchk"
	Nov 08 10:34:00 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:00.857686    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bdac5f1-b816-4d00-96e9-334a4c83aaf5-lib-modules\") pod \"kindnet-7jcpv\" (UID: \"1bdac5f1-b816-4d00-96e9-334a4c83aaf5\") " pod="kube-system/kindnet-7jcpv"
	Nov 08 10:34:00 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:00.857781    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f2268ef-7cb4-455a-a158-38a4a9fed026-xtables-lock\") pod \"kube-proxy-rtchk\" (UID: \"3f2268ef-7cb4-455a-a158-38a4a9fed026\") " pod="kube-system/kube-proxy-rtchk"
	Nov 08 10:34:00 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:00.857884    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f2268ef-7cb4-455a-a158-38a4a9fed026-lib-modules\") pod \"kube-proxy-rtchk\" (UID: \"3f2268ef-7cb4-455a-a158-38a4a9fed026\") " pod="kube-system/kube-proxy-rtchk"
	Nov 08 10:34:01 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:01.026365    1320 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:34:02 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:02.188230    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7jcpv" podStartSLOduration=2.188209526 podStartE2EDuration="2.188209526s" podCreationTimestamp="2025-11-08 10:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:34:02.129293835 +0000 UTC m=+7.381790519" watchObservedRunningTime="2025-11-08 10:34:02.188209526 +0000 UTC m=+7.440706194"
	Nov 08 10:34:05 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:05.818347    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rtchk" podStartSLOduration=5.818330513 podStartE2EDuration="5.818330513s" podCreationTimestamp="2025-11-08 10:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:34:02.233141884 +0000 UTC m=+7.485638544" watchObservedRunningTime="2025-11-08 10:34:05.818330513 +0000 UTC m=+11.070827172"
	Nov 08 10:34:42 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:42.460177    1320 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 10:34:42 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:42.703348    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a37e11d-012b-43a6-bdfb-eed3dee25c16-config-volume\") pod \"coredns-66bc5c9577-x99cj\" (UID: \"0a37e11d-012b-43a6-bdfb-eed3dee25c16\") " pod="kube-system/coredns-66bc5c9577-x99cj"
	Nov 08 10:34:42 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:42.703454    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cda5c093-f604-49d1-90ad-770da6575a3e-tmp\") pod \"storage-provisioner\" (UID: \"cda5c093-f604-49d1-90ad-770da6575a3e\") " pod="kube-system/storage-provisioner"
	Nov 08 10:34:42 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:42.703476    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs6w5\" (UniqueName: \"kubernetes.io/projected/cda5c093-f604-49d1-90ad-770da6575a3e-kube-api-access-cs6w5\") pod \"storage-provisioner\" (UID: \"cda5c093-f604-49d1-90ad-770da6575a3e\") " pod="kube-system/storage-provisioner"
	Nov 08 10:34:42 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:42.703540    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxdf4\" (UniqueName: \"kubernetes.io/projected/0a37e11d-012b-43a6-bdfb-eed3dee25c16-kube-api-access-dxdf4\") pod \"coredns-66bc5c9577-x99cj\" (UID: \"0a37e11d-012b-43a6-bdfb-eed3dee25c16\") " pod="kube-system/coredns-66bc5c9577-x99cj"
	Nov 08 10:34:43 default-k8s-diff-port-236075 kubelet[1320]: W1108 10:34:43.168751    1320 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/crio-5ba11c1787b981cfa8275efb91db44f40609e2053adf3e94a9ee86c286fbbc84 WatchSource:0}: Error finding container 5ba11c1787b981cfa8275efb91db44f40609e2053adf3e94a9ee86c286fbbc84: Status 404 returned error can't find the container with id 5ba11c1787b981cfa8275efb91db44f40609e2053adf3e94a9ee86c286fbbc84
	Nov 08 10:34:44 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:44.285413    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.285396817 podStartE2EDuration="41.285396817s" podCreationTimestamp="2025-11-08 10:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:34:44.24278474 +0000 UTC m=+49.495281417" watchObservedRunningTime="2025-11-08 10:34:44.285396817 +0000 UTC m=+49.537893477"
	Nov 08 10:34:46 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:46.473475    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x99cj" podStartSLOduration=46.473455755 podStartE2EDuration="46.473455755s" podCreationTimestamp="2025-11-08 10:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:34:44.287468451 +0000 UTC m=+49.539965127" watchObservedRunningTime="2025-11-08 10:34:46.473455755 +0000 UTC m=+51.725952414"
	Nov 08 10:34:46 default-k8s-diff-port-236075 kubelet[1320]: I1108 10:34:46.637107    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdj6d\" (UniqueName: \"kubernetes.io/projected/f71e109f-b88f-4781-b4ac-aaabd22ff178-kube-api-access-vdj6d\") pod \"busybox\" (UID: \"f71e109f-b88f-4781-b4ac-aaabd22ff178\") " pod="default/busybox"
	
	
	==> storage-provisioner [4653d4703b0926456ca935181f7c827f409fe11abcfb5f0862fb67f81382b234] <==
	I1108 10:34:43.265216       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:34:43.302834       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:34:43.302903       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:34:43.307686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:43.314756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:34:43.315032       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:34:43.317480       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-236075_fca196e2-d98a-41d1-9182-bb9881d4aa81!
	W1108 10:34:43.317808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:34:43.321549       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee54a8f0-7b96-489a-b394-63ad7711ea02", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-236075_fca196e2-d98a-41d1-9182-bb9881d4aa81 became leader
	W1108 10:34:43.331802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:34:43.417664       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-236075_fca196e2-d98a-41d1-9182-bb9881d4aa81!
	W1108 10:34:45.336557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:45.351028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:47.354187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:47.361091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:49.364563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:49.368845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:51.372232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:51.378997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:53.382299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:53.387856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:55.397646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:34:55.409223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-236075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (309.553148ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:35:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-790346 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-790346 describe deploy/metrics-server -n kube-system: exit status 1 (82.911139ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-790346 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-790346
helpers_test.go:243: (dbg) docker inspect embed-certs-790346:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7",
	        "Created": "2025-11-08T10:34:14.160209579Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1216813,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:34:14.225596231Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/hostname",
	        "HostsPath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/hosts",
	        "LogPath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7-json.log",
	        "Name": "/embed-certs-790346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-790346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-790346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7",
	                "LowerDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-790346",
	                "Source": "/var/lib/docker/volumes/embed-certs-790346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-790346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-790346",
	                "name.minikube.sigs.k8s.io": "embed-certs-790346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3e13517a38f1d1ab87cb998bddd27283ba946d9cac5713e774e33ccce403ebc",
	            "SandboxKey": "/var/run/docker/netns/e3e13517a38f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34522"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34523"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34526"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34524"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34525"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-790346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:1f:7b:80:9b:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d495b48ffde5b28a4ff62dc6240c1429227e085b124c5835b7607c15b8bf3dd5",
	                    "EndpointID": "2a5c1b3328e58cd47111397938431c00508f2ba3c668125d77287203db3e1012",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-790346",
	                        "c42811f48049"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
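The inspect output above shows each container port published only on 127.0.0.1 with an ephemeral host port (for example 22/tcp -> 34522, which is the port minikube later dials for SSH provisioning). The Go sketch below is illustrative only and not part of the captured output; it reads that mapping with the same docker inspect format template that appears in the provisioning log further down, assuming the embed-certs-790346 container is still running locally.

    // Illustrative sketch: read the host port mapped to the container's SSH
    // port (34522 in the inspect output above) via a docker inspect template.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
            "embed-certs-790346").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
    }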
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790346 -n embed-certs-790346
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-790346 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-790346 logs -n 25: (1.334566989s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p kubernetes-upgrade-666491                                                                                                                                                                                                                  │ kubernetes-upgrade-666491    │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p force-systemd-env-680693                                                                                                                                                                                                                   │ force-systemd-env-680693     │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:29 UTC │
	│ start   │ -p cert-options-517657 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:29 UTC │ 08 Nov 25 10:30 UTC │
	│ ssh     │ cert-options-517657 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ ssh     │ -p cert-options-517657 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-517657                                                                                                                                                                                                                        │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-171136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-171136 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │ 08 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-171136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ image   │ old-k8s-version-171136 image list --format=json                                                                                                                                                                                               │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-171136 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-837698                                                                                                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-236075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-236075 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-236075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:35:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:35:09.313109 1219770 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:35:09.313230 1219770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:35:09.313242 1219770 out.go:374] Setting ErrFile to fd 2...
	I1108 10:35:09.313248 1219770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:35:09.313513 1219770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:35:09.313940 1219770 out.go:368] Setting JSON to false
	I1108 10:35:09.314929 1219770 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33455,"bootTime":1762564655,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:35:09.315053 1219770 start.go:143] virtualization:  
	I1108 10:35:09.319818 1219770 out.go:179] * [default-k8s-diff-port-236075] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:35:09.322935 1219770 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:35:09.322994 1219770 notify.go:221] Checking for updates...
	I1108 10:35:09.329281 1219770 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:35:09.332218 1219770 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:35:09.335110 1219770 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:35:09.338056 1219770 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:35:09.341066 1219770 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:35:09.344604 1219770 config.go:182] Loaded profile config "default-k8s-diff-port-236075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:35:09.345215 1219770 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:35:09.370217 1219770 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:35:09.370333 1219770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:35:09.451169 1219770 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:35:09.427335686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:35:09.451273 1219770 docker.go:319] overlay module found
	I1108 10:35:09.454442 1219770 out.go:179] * Using the docker driver based on existing profile
	I1108 10:35:09.457417 1219770 start.go:309] selected driver: docker
	I1108 10:35:09.457437 1219770 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-236075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-236075 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:35:09.457551 1219770 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:35:09.458434 1219770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:35:09.531506 1219770 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:35:09.52076638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:35:09.531856 1219770 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:35:09.531891 1219770 cni.go:84] Creating CNI manager for ""
	I1108 10:35:09.531946 1219770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:35:09.531989 1219770 start.go:353] cluster config:
	{Name:default-k8s-diff-port-236075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-236075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:35:09.535262 1219770 out.go:179] * Starting "default-k8s-diff-port-236075" primary control-plane node in "default-k8s-diff-port-236075" cluster
	I1108 10:35:09.538085 1219770 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:35:09.541062 1219770 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:35:09.543872 1219770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:35:09.543943 1219770 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:35:09.543949 1219770 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:35:09.543958 1219770 cache.go:59] Caching tarball of preloaded images
	I1108 10:35:09.544047 1219770 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:35:09.544058 1219770 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:35:09.544164 1219770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/config.json ...
	I1108 10:35:09.564052 1219770 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:35:09.564080 1219770 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:35:09.564098 1219770 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:35:09.564120 1219770 start.go:360] acquireMachinesLock for default-k8s-diff-port-236075: {Name:mk6b91e5c303401c9829ac8a335e9a0f9a68eeab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:35:09.564189 1219770 start.go:364] duration metric: took 45.594µs to acquireMachinesLock for "default-k8s-diff-port-236075"
	I1108 10:35:09.564213 1219770 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:35:09.564228 1219770 fix.go:54] fixHost starting: 
	I1108 10:35:09.564540 1219770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-236075 --format={{.State.Status}}
	I1108 10:35:09.582563 1219770 fix.go:112] recreateIfNeeded on default-k8s-diff-port-236075: state=Stopped err=<nil>
	W1108 10:35:09.582595 1219770 fix.go:138] unexpected machine state, will restart: <nil>
	W1108 10:35:08.963349 1216426 node_ready.go:57] node "embed-certs-790346" has "Ready":"False" status (will retry)
	W1108 10:35:11.461991 1216426 node_ready.go:57] node "embed-certs-790346" has "Ready":"False" status (will retry)
	I1108 10:35:09.585885 1219770 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-236075" ...
	I1108 10:35:09.585969 1219770 cli_runner.go:164] Run: docker start default-k8s-diff-port-236075
	I1108 10:35:09.834045 1219770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-236075 --format={{.State.Status}}
	I1108 10:35:09.857784 1219770 kic.go:430] container "default-k8s-diff-port-236075" state is running.
	I1108 10:35:09.858959 1219770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-236075
	I1108 10:35:09.882421 1219770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/config.json ...
	I1108 10:35:09.882651 1219770 machine.go:94] provisionDockerMachine start ...
	I1108 10:35:09.882995 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:09.907255 1219770 main.go:143] libmachine: Using SSH client type: native
	I1108 10:35:09.907581 1219770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34527 <nil> <nil>}
	I1108 10:35:09.907591 1219770 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:35:09.908201 1219770 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56606->127.0.0.1:34527: read: connection reset by peer
	I1108 10:35:13.064139 1219770 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-236075
	
	I1108 10:35:13.064163 1219770 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-236075"
	I1108 10:35:13.064253 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:13.082201 1219770 main.go:143] libmachine: Using SSH client type: native
	I1108 10:35:13.082516 1219770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34527 <nil> <nil>}
	I1108 10:35:13.082534 1219770 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-236075 && echo "default-k8s-diff-port-236075" | sudo tee /etc/hostname
	I1108 10:35:13.241318 1219770 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-236075
	
	I1108 10:35:13.241394 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:13.260064 1219770 main.go:143] libmachine: Using SSH client type: native
	I1108 10:35:13.260380 1219770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34527 <nil> <nil>}
	I1108 10:35:13.260424 1219770 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-236075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-236075/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-236075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:35:13.413264 1219770 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:35:13.413290 1219770 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:35:13.413363 1219770 ubuntu.go:190] setting up certificates
	I1108 10:35:13.413373 1219770 provision.go:84] configureAuth start
	I1108 10:35:13.413459 1219770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-236075
	I1108 10:35:13.433456 1219770 provision.go:143] copyHostCerts
	I1108 10:35:13.433521 1219770 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:35:13.433538 1219770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:35:13.433614 1219770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:35:13.433718 1219770 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:35:13.433724 1219770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:35:13.433752 1219770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:35:13.433818 1219770 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:35:13.433822 1219770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:35:13.433845 1219770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:35:13.433898 1219770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-236075 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-236075 localhost minikube]
	I1108 10:35:13.554441 1219770 provision.go:177] copyRemoteCerts
	I1108 10:35:13.554508 1219770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:35:13.554573 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:13.572965 1219770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/default-k8s-diff-port-236075/id_rsa Username:docker}
	I1108 10:35:13.681423 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:35:13.699850 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:35:13.719537 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 10:35:13.736846 1219770 provision.go:87] duration metric: took 323.445854ms to configureAuth
	I1108 10:35:13.736918 1219770 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:35:13.737125 1219770 config.go:182] Loaded profile config "default-k8s-diff-port-236075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:35:13.737235 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:13.755086 1219770 main.go:143] libmachine: Using SSH client type: native
	I1108 10:35:13.755391 1219770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34527 <nil> <nil>}
	I1108 10:35:13.755413 1219770 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:35:14.090572 1219770 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:35:14.090594 1219770 machine.go:97] duration metric: took 4.207933681s to provisionDockerMachine
	I1108 10:35:14.090605 1219770 start.go:293] postStartSetup for "default-k8s-diff-port-236075" (driver="docker")
	I1108 10:35:14.090616 1219770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:35:14.090688 1219770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:35:14.090731 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:14.120743 1219770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/default-k8s-diff-port-236075/id_rsa Username:docker}
	I1108 10:35:14.233487 1219770 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:35:14.237089 1219770 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:35:14.237121 1219770 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:35:14.237132 1219770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:35:14.237192 1219770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:35:14.237277 1219770 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:35:14.237384 1219770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:35:14.245054 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:35:14.263363 1219770 start.go:296] duration metric: took 172.742212ms for postStartSetup
	I1108 10:35:14.263456 1219770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:35:14.263503 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:14.280601 1219770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/default-k8s-diff-port-236075/id_rsa Username:docker}
	I1108 10:35:14.381387 1219770 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:35:14.385896 1219770 fix.go:56] duration metric: took 4.821665576s for fixHost
	I1108 10:35:14.385925 1219770 start.go:83] releasing machines lock for "default-k8s-diff-port-236075", held for 4.821723363s
	I1108 10:35:14.385990 1219770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-236075
	I1108 10:35:14.406134 1219770 ssh_runner.go:195] Run: cat /version.json
	I1108 10:35:14.406190 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:14.406511 1219770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:35:14.406574 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:14.426720 1219770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/default-k8s-diff-port-236075/id_rsa Username:docker}
	I1108 10:35:14.427558 1219770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/default-k8s-diff-port-236075/id_rsa Username:docker}
	I1108 10:35:14.532265 1219770 ssh_runner.go:195] Run: systemctl --version
	I1108 10:35:14.627400 1219770 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:35:14.666912 1219770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:35:14.671333 1219770 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:35:14.671408 1219770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:35:14.679518 1219770 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:35:14.679544 1219770 start.go:496] detecting cgroup driver to use...
	I1108 10:35:14.679606 1219770 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:35:14.679674 1219770 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:35:14.695685 1219770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:35:14.709739 1219770 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:35:14.709879 1219770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:35:14.727760 1219770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:35:14.741653 1219770 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:35:14.864824 1219770 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:35:15.015427 1219770 docker.go:234] disabling docker service ...
	I1108 10:35:15.015592 1219770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:35:15.034499 1219770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:35:15.048895 1219770 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:35:15.179902 1219770 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:35:15.302657 1219770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:35:15.315715 1219770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:35:15.330462 1219770 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:35:15.330580 1219770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:35:15.339783 1219770 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:35:15.339912 1219770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:35:15.349222 1219770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:35:15.359130 1219770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:35:15.368153 1219770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:35:15.376552 1219770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:35:15.385651 1219770 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:35:15.393900 1219770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:35:15.402761 1219770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:35:15.414805 1219770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:35:15.422925 1219770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:35:15.549044 1219770 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:35:15.683956 1219770 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:35:15.684024 1219770 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:35:15.687985 1219770 start.go:564] Will wait 60s for crictl version
	I1108 10:35:15.688058 1219770 ssh_runner.go:195] Run: which crictl
	I1108 10:35:15.691460 1219770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:35:15.717703 1219770 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:35:15.717797 1219770 ssh_runner.go:195] Run: crio --version
	I1108 10:35:15.747504 1219770 ssh_runner.go:195] Run: crio --version
	I1108 10:35:15.779503 1219770 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:35:15.782410 1219770 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-236075 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:35:15.798905 1219770 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:35:15.802620 1219770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:35:15.811916 1219770 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-236075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-236075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:35:15.812041 1219770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:35:15.812102 1219770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:35:15.858779 1219770 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:35:15.858803 1219770 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:35:15.858860 1219770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:35:15.884627 1219770 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:35:15.884649 1219770 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:35:15.884657 1219770 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1108 10:35:15.884757 1219770 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-236075 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-236075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:35:15.884841 1219770 ssh_runner.go:195] Run: crio config
	I1108 10:35:15.943684 1219770 cni.go:84] Creating CNI manager for ""
	I1108 10:35:15.943710 1219770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:35:15.943733 1219770 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:35:15.943756 1219770 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-236075 NodeName:default-k8s-diff-port-236075 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:35:15.943912 1219770 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-236075"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:35:15.943982 1219770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:35:15.951737 1219770 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:35:15.951831 1219770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:35:15.959779 1219770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 10:35:15.973459 1219770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:35:15.987595 1219770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
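The kubeadm/kubelet/kube-proxy configuration dumped above is rendered per profile and written to /var/tmp/minikube/kubeadm.yaml.new (the 2225-byte scp just above) before being diffed against the live file. As a hedged illustration only (not minikube's actual template or struct), a small text/template render of the InitConfiguration fragment with the values seen in the log:

```go
// render_kubeadm_config.go
// Illustrative rendering of a kubeadm InitConfiguration fragment like the one
// dumped above; field names and template are assumptions for the sketch.
package main

import (
	"os"
	"text/template"
)

// initCfg holds the handful of values that vary per profile in the dump above.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	cfg := initCfg{
		AdvertiseAddress: "192.168.85.2",
		BindPort:         8444,
		NodeName:         "default-k8s-diff-port-236075",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```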
	I1108 10:35:16.000345 1219770 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:35:16.005352 1219770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:35:16.016115 1219770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:35:16.132714 1219770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:35:16.149833 1219770 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075 for IP: 192.168.85.2
	I1108 10:35:16.149857 1219770 certs.go:195] generating shared ca certs ...
	I1108 10:35:16.149883 1219770 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:35:16.150016 1219770 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:35:16.150096 1219770 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:35:16.150109 1219770 certs.go:257] generating profile certs ...
	I1108 10:35:16.150201 1219770 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.key
	I1108 10:35:16.150271 1219770 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/apiserver.key.221ad755
	I1108 10:35:16.150312 1219770 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/proxy-client.key
	I1108 10:35:16.150448 1219770 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:35:16.150481 1219770 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:35:16.150495 1219770 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:35:16.150518 1219770 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:35:16.150548 1219770 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:35:16.150573 1219770 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:35:16.150625 1219770 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:35:16.151236 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:35:16.171997 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:35:16.192192 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:35:16.216421 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:35:16.241953 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 10:35:16.266132 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:35:16.288932 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:35:16.308427 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:35:16.333055 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:35:16.352039 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:35:16.372357 1219770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:35:16.391755 1219770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:35:16.405057 1219770 ssh_runner.go:195] Run: openssl version
	I1108 10:35:16.411699 1219770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:35:16.421116 1219770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:35:16.425432 1219770 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:35:16.425507 1219770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:35:16.472183 1219770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:35:16.480006 1219770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:35:16.489699 1219770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:35:16.493659 1219770 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:35:16.493737 1219770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:35:16.535244 1219770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:35:16.543742 1219770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:35:16.553230 1219770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:35:16.557216 1219770 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:35:16.557300 1219770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:35:16.598846 1219770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:35:16.606931 1219770 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:35:16.610628 1219770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:35:16.653513 1219770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:35:16.694495 1219770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:35:16.735722 1219770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:35:16.778091 1219770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:35:16.833183 1219770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
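Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks whether the certificate remains valid for at least another 24 hours. A hedged Go equivalent using crypto/x509 (sketch only; the path below is one of the certs checked above):

```go
// check_cert_expiry.go
// Sketch equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
// report whether the certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + window" is past NotAfter, i.e. the cert expires within the window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}
```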
	I1108 10:35:16.896261 1219770 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-236075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-236075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:35:16.896410 1219770 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:35:16.896517 1219770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:35:16.998687 1219770 cri.go:89] found id: "01e006bfc6ddabc4f5b52b75d55b814f77b7715ec181a90987b6959c64dc9976"
	I1108 10:35:16.998766 1219770 cri.go:89] found id: "acec2edc4de9822c06eae3e3c3a9f215ef4f521d8d4f7376ca41845506b657b4"
	I1108 10:35:16.998785 1219770 cri.go:89] found id: "fa7185ae3ba9637256692faca55ed64deec71e9effbe9eebdae3f3c26cca6005"
	I1108 10:35:16.998809 1219770 cri.go:89] found id: "7e2e28dd3fc4c2eca9405df29e70031d910548f4d6fcf55d46048b375ddadca6"
	I1108 10:35:16.998844 1219770 cri.go:89] found id: ""
	I1108 10:35:16.998931 1219770 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:35:17.019463 1219770 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:35:17Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:35:17.019586 1219770 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:35:17.034548 1219770 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:35:17.034618 1219770 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:35:17.034701 1219770 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:35:17.056627 1219770 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:35:17.057626 1219770 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-236075" does not appear in /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:35:17.058349 1219770 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-1027379/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-236075" cluster setting kubeconfig missing "default-k8s-diff-port-236075" context setting]
	I1108 10:35:17.059556 1219770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:35:17.061411 1219770 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:35:17.075361 1219770 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:35:17.075449 1219770 kubeadm.go:602] duration metric: took 40.810325ms to restartPrimaryControlPlane
	I1108 10:35:17.075473 1219770 kubeadm.go:403] duration metric: took 179.224274ms to StartCluster
	I1108 10:35:17.075523 1219770 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:35:17.075644 1219770 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:35:17.077565 1219770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:35:17.077933 1219770 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:35:17.078371 1219770 config.go:182] Loaded profile config "default-k8s-diff-port-236075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:35:17.078465 1219770 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:35:17.078670 1219770 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-236075"
	I1108 10:35:17.078710 1219770 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-236075"
	W1108 10:35:17.078747 1219770 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:35:17.078788 1219770 host.go:66] Checking if "default-k8s-diff-port-236075" exists ...
	I1108 10:35:17.079435 1219770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-236075 --format={{.State.Status}}
	I1108 10:35:17.079650 1219770 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-236075"
	I1108 10:35:17.079696 1219770 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-236075"
	W1108 10:35:17.079719 1219770 addons.go:248] addon dashboard should already be in state true
	I1108 10:35:17.079773 1219770 host.go:66] Checking if "default-k8s-diff-port-236075" exists ...
	I1108 10:35:17.079955 1219770 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-236075"
	I1108 10:35:17.079981 1219770 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-236075"
	I1108 10:35:17.080287 1219770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-236075 --format={{.State.Status}}
	I1108 10:35:17.080359 1219770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-236075 --format={{.State.Status}}
	I1108 10:35:17.088669 1219770 out.go:179] * Verifying Kubernetes components...
	I1108 10:35:17.092157 1219770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:35:17.126645 1219770 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:35:17.130701 1219770 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:35:17.134765 1219770 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:35:17.134793 1219770 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:35:17.134874 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:17.149716 1219770 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1108 10:35:13.962109 1216426 node_ready.go:57] node "embed-certs-790346" has "Ready":"False" status (will retry)
	W1108 10:35:16.461338 1216426 node_ready.go:57] node "embed-certs-790346" has "Ready":"False" status (will retry)
	I1108 10:35:17.153347 1219770 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:35:17.153370 1219770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:35:17.153434 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:17.153534 1219770 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-236075"
	W1108 10:35:17.153550 1219770 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:35:17.153576 1219770 host.go:66] Checking if "default-k8s-diff-port-236075" exists ...
	I1108 10:35:17.154008 1219770 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-236075 --format={{.State.Status}}
	I1108 10:35:17.191020 1219770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/default-k8s-diff-port-236075/id_rsa Username:docker}
	I1108 10:35:17.213938 1219770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/default-k8s-diff-port-236075/id_rsa Username:docker}
	I1108 10:35:17.218026 1219770 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:35:17.218050 1219770 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:35:17.218113 1219770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:35:17.249348 1219770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/default-k8s-diff-port-236075/id_rsa Username:docker}
	I1108 10:35:17.444836 1219770 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:35:17.444863 1219770 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:35:17.474082 1219770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:35:17.514771 1219770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:35:17.530932 1219770 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-236075" to be "Ready" ...
	I1108 10:35:17.549345 1219770 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:35:17.549418 1219770 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:35:17.594509 1219770 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:35:17.594576 1219770 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:35:17.596471 1219770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:35:17.633800 1219770 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:35:17.633869 1219770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:35:17.674075 1219770 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:35:17.674147 1219770 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:35:17.712993 1219770 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:35:17.713066 1219770 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:35:17.755067 1219770 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:35:17.755138 1219770 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:35:17.797835 1219770 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:35:17.797907 1219770 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:35:17.859944 1219770 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:35:17.860019 1219770 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:35:17.892988 1219770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:35:21.206279 1219770 node_ready.go:49] node "default-k8s-diff-port-236075" is "Ready"
	I1108 10:35:21.206350 1219770 node_ready.go:38] duration metric: took 3.675336765s for node "default-k8s-diff-port-236075" to be "Ready" ...
	I1108 10:35:21.206389 1219770 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:35:21.206486 1219770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:35:21.566154 1219770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.051304224s)
	I1108 10:35:22.574246 1219770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.9776974s)
	I1108 10:35:22.641463 1219770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.748359675s)
	I1108 10:35:22.641659 1219770 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.435139656s)
	I1108 10:35:22.641713 1219770 api_server.go:72] duration metric: took 5.563717685s to wait for apiserver process to appear ...
	I1108 10:35:22.641739 1219770 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:35:22.641770 1219770 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1108 10:35:22.644557 1219770 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-236075 addons enable metrics-server
	
	I1108 10:35:22.647548 1219770 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	W1108 10:35:18.961736 1216426 node_ready.go:57] node "embed-certs-790346" has "Ready":"False" status (will retry)
	W1108 10:35:21.462232 1216426 node_ready.go:57] node "embed-certs-790346" has "Ready":"False" status (will retry)
	W1108 10:35:23.487503 1216426 node_ready.go:57] node "embed-certs-790346" has "Ready":"False" status (will retry)
	I1108 10:35:22.650422 1219770 addons.go:515] duration metric: took 5.571941781s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1108 10:35:22.654922 1219770 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 10:35:22.654947 1219770 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 10:35:23.142592 1219770 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1108 10:35:23.158248 1219770 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1108 10:35:23.167041 1219770 api_server.go:141] control plane version: v1.34.1
	I1108 10:35:23.167113 1219770 api_server.go:131] duration metric: took 525.352759ms to wait for apiserver health ...
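The healthz wait above keeps probing https://192.168.85.2:8444/healthz: the first response is a 500 because the poststarthook/rbac/bootstrap-roles hook has not finished, and the loop retries until the endpoint returns 200 "ok". A hedged Go sketch of such a polling loop (TLS verification is skipped purely for brevity in the sketch; a real client would trust the cluster CA):

```go
// wait_apiserver_healthz.go
// Sketch of the polling above: hit /healthz until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
```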
	I1108 10:35:23.167138 1219770 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:35:23.179035 1219770 system_pods.go:59] 8 kube-system pods found
	I1108 10:35:23.179131 1219770 system_pods.go:61] "coredns-66bc5c9577-x99cj" [0a37e11d-012b-43a6-bdfb-eed3dee25c16] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:35:23.179160 1219770 system_pods.go:61] "etcd-default-k8s-diff-port-236075" [48a515c0-6a89-4cf7-b22c-c3cdaafc02fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:35:23.179197 1219770 system_pods.go:61] "kindnet-7jcpv" [1bdac5f1-b816-4d00-96e9-334a4c83aaf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:35:23.179224 1219770 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-236075" [fb4ba6c7-7d01-4104-821b-34e65780d496] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:35:23.179253 1219770 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-236075" [c6cfe97d-ff1c-4910-958c-73e97e1f9944] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:35:23.179284 1219770 system_pods.go:61] "kube-proxy-rtchk" [3f2268ef-7cb4-455a-a158-38a4a9fed026] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:35:23.179326 1219770 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-236075" [12c9dd77-62d2-48a8-8296-bdfec4ca2b99] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:35:23.179353 1219770 system_pods.go:61] "storage-provisioner" [cda5c093-f604-49d1-90ad-770da6575a3e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:35:23.179376 1219770 system_pods.go:74] duration metric: took 12.217624ms to wait for pod list to return data ...
	I1108 10:35:23.179409 1219770 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:35:23.183770 1219770 default_sa.go:45] found service account: "default"
	I1108 10:35:23.183836 1219770 default_sa.go:55] duration metric: took 4.402509ms for default service account to be created ...
	I1108 10:35:23.183861 1219770 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:35:23.190568 1219770 system_pods.go:86] 8 kube-system pods found
	I1108 10:35:23.190655 1219770 system_pods.go:89] "coredns-66bc5c9577-x99cj" [0a37e11d-012b-43a6-bdfb-eed3dee25c16] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:35:23.190681 1219770 system_pods.go:89] "etcd-default-k8s-diff-port-236075" [48a515c0-6a89-4cf7-b22c-c3cdaafc02fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:35:23.190729 1219770 system_pods.go:89] "kindnet-7jcpv" [1bdac5f1-b816-4d00-96e9-334a4c83aaf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:35:23.190760 1219770 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-236075" [fb4ba6c7-7d01-4104-821b-34e65780d496] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:35:23.190783 1219770 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-236075" [c6cfe97d-ff1c-4910-958c-73e97e1f9944] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:35:23.190805 1219770 system_pods.go:89] "kube-proxy-rtchk" [3f2268ef-7cb4-455a-a158-38a4a9fed026] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:35:23.190839 1219770 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-236075" [12c9dd77-62d2-48a8-8296-bdfec4ca2b99] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:35:23.190872 1219770 system_pods.go:89] "storage-provisioner" [cda5c093-f604-49d1-90ad-770da6575a3e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:35:23.190897 1219770 system_pods.go:126] duration metric: took 7.015271ms to wait for k8s-apps to be running ...
	I1108 10:35:23.190921 1219770 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:35:23.190997 1219770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:35:23.217735 1219770 system_svc.go:56] duration metric: took 26.805227ms WaitForService to wait for kubelet
	I1108 10:35:23.217813 1219770 kubeadm.go:587] duration metric: took 6.139820777s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:35:23.217852 1219770 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:35:23.221063 1219770 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:35:23.221139 1219770 node_conditions.go:123] node cpu capacity is 2
	I1108 10:35:23.221168 1219770 node_conditions.go:105] duration metric: took 3.295243ms to run NodePressure ...
	I1108 10:35:23.221213 1219770 start.go:242] waiting for startup goroutines ...
	I1108 10:35:23.221240 1219770 start.go:247] waiting for cluster config update ...
	I1108 10:35:23.221267 1219770 start.go:256] writing updated cluster config ...
	I1108 10:35:23.221568 1219770 ssh_runner.go:195] Run: rm -f paused
	I1108 10:35:23.229900 1219770 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:35:23.272245 1219770 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x99cj" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:35:25.961262 1216426 node_ready.go:57] node "embed-certs-790346" has "Ready":"False" status (will retry)
	W1108 10:35:27.961479 1216426 node_ready.go:57] node "embed-certs-790346" has "Ready":"False" status (will retry)
	W1108 10:35:25.278264 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	W1108 10:35:27.278576 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	W1108 10:35:29.279292 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	W1108 10:35:30.461391 1216426 node_ready.go:57] node "embed-certs-790346" has "Ready":"False" status (will retry)
	I1108 10:35:31.466092 1216426 node_ready.go:49] node "embed-certs-790346" is "Ready"
	I1108 10:35:31.466118 1216426 node_ready.go:38] duration metric: took 40.507971963s for node "embed-certs-790346" to be "Ready" ...
	I1108 10:35:31.466132 1216426 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:35:31.466189 1216426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:35:31.524311 1216426 api_server.go:72] duration metric: took 41.816819698s to wait for apiserver process to appear ...
	I1108 10:35:31.524333 1216426 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:35:31.524350 1216426 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:35:31.536169 1216426 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:35:31.538115 1216426 api_server.go:141] control plane version: v1.34.1
	I1108 10:35:31.538139 1216426 api_server.go:131] duration metric: took 13.79964ms to wait for apiserver health ...
	I1108 10:35:31.538148 1216426 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:35:31.544127 1216426 system_pods.go:59] 8 kube-system pods found
	I1108 10:35:31.544161 1216426 system_pods.go:61] "coredns-66bc5c9577-74xnp" [2be7fc7e-41f5-4dd2-bd38-28d8b7116878] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:35:31.544168 1216426 system_pods.go:61] "etcd-embed-certs-790346" [197baf26-b4ce-4eb3-a0b3-e77ae44ffc82] Running
	I1108 10:35:31.544175 1216426 system_pods.go:61] "kindnet-8978r" [ecd1e33a-2ecd-4aca-88f0-3f7c7546923d] Running
	I1108 10:35:31.544180 1216426 system_pods.go:61] "kube-apiserver-embed-certs-790346" [160ec369-c7d1-415d-bd81-807e8cb09deb] Running
	I1108 10:35:31.544185 1216426 system_pods.go:61] "kube-controller-manager-embed-certs-790346" [981fcf69-b2e5-4632-a888-b709045ba236] Running
	I1108 10:35:31.544190 1216426 system_pods.go:61] "kube-proxy-fx79j" [b9772cfb-4249-49a2-ab14-39aabc3dcc92] Running
	I1108 10:35:31.544194 1216426 system_pods.go:61] "kube-scheduler-embed-certs-790346" [77653d47-f56e-4a9c-b9ab-2f90a97947a8] Running
	I1108 10:35:31.544201 1216426 system_pods.go:61] "storage-provisioner" [30b396c5-a02e-4644-b513-31e6a6daf67b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:35:31.544206 1216426 system_pods.go:74] duration metric: took 6.053085ms to wait for pod list to return data ...
	I1108 10:35:31.544215 1216426 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:35:31.550225 1216426 default_sa.go:45] found service account: "default"
	I1108 10:35:31.550251 1216426 default_sa.go:55] duration metric: took 6.030136ms for default service account to be created ...
	I1108 10:35:31.550262 1216426 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:35:31.558963 1216426 system_pods.go:86] 8 kube-system pods found
	I1108 10:35:31.558994 1216426 system_pods.go:89] "coredns-66bc5c9577-74xnp" [2be7fc7e-41f5-4dd2-bd38-28d8b7116878] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:35:31.559000 1216426 system_pods.go:89] "etcd-embed-certs-790346" [197baf26-b4ce-4eb3-a0b3-e77ae44ffc82] Running
	I1108 10:35:31.559007 1216426 system_pods.go:89] "kindnet-8978r" [ecd1e33a-2ecd-4aca-88f0-3f7c7546923d] Running
	I1108 10:35:31.559011 1216426 system_pods.go:89] "kube-apiserver-embed-certs-790346" [160ec369-c7d1-415d-bd81-807e8cb09deb] Running
	I1108 10:35:31.559015 1216426 system_pods.go:89] "kube-controller-manager-embed-certs-790346" [981fcf69-b2e5-4632-a888-b709045ba236] Running
	I1108 10:35:31.559019 1216426 system_pods.go:89] "kube-proxy-fx79j" [b9772cfb-4249-49a2-ab14-39aabc3dcc92] Running
	I1108 10:35:31.559023 1216426 system_pods.go:89] "kube-scheduler-embed-certs-790346" [77653d47-f56e-4a9c-b9ab-2f90a97947a8] Running
	I1108 10:35:31.559028 1216426 system_pods.go:89] "storage-provisioner" [30b396c5-a02e-4644-b513-31e6a6daf67b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:35:31.559060 1216426 retry.go:31] will retry after 200.922624ms: missing components: kube-dns
	I1108 10:35:31.781869 1216426 system_pods.go:86] 8 kube-system pods found
	I1108 10:35:31.781908 1216426 system_pods.go:89] "coredns-66bc5c9577-74xnp" [2be7fc7e-41f5-4dd2-bd38-28d8b7116878] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:35:31.781915 1216426 system_pods.go:89] "etcd-embed-certs-790346" [197baf26-b4ce-4eb3-a0b3-e77ae44ffc82] Running
	I1108 10:35:31.781921 1216426 system_pods.go:89] "kindnet-8978r" [ecd1e33a-2ecd-4aca-88f0-3f7c7546923d] Running
	I1108 10:35:31.781926 1216426 system_pods.go:89] "kube-apiserver-embed-certs-790346" [160ec369-c7d1-415d-bd81-807e8cb09deb] Running
	I1108 10:35:31.781930 1216426 system_pods.go:89] "kube-controller-manager-embed-certs-790346" [981fcf69-b2e5-4632-a888-b709045ba236] Running
	I1108 10:35:31.781935 1216426 system_pods.go:89] "kube-proxy-fx79j" [b9772cfb-4249-49a2-ab14-39aabc3dcc92] Running
	I1108 10:35:31.781939 1216426 system_pods.go:89] "kube-scheduler-embed-certs-790346" [77653d47-f56e-4a9c-b9ab-2f90a97947a8] Running
	I1108 10:35:31.781947 1216426 system_pods.go:89] "storage-provisioner" [30b396c5-a02e-4644-b513-31e6a6daf67b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:35:31.781962 1216426 retry.go:31] will retry after 266.225ms: missing components: kube-dns
	I1108 10:35:32.053528 1216426 system_pods.go:86] 8 kube-system pods found
	I1108 10:35:32.053563 1216426 system_pods.go:89] "coredns-66bc5c9577-74xnp" [2be7fc7e-41f5-4dd2-bd38-28d8b7116878] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:35:32.053572 1216426 system_pods.go:89] "etcd-embed-certs-790346" [197baf26-b4ce-4eb3-a0b3-e77ae44ffc82] Running
	I1108 10:35:32.053579 1216426 system_pods.go:89] "kindnet-8978r" [ecd1e33a-2ecd-4aca-88f0-3f7c7546923d] Running
	I1108 10:35:32.053583 1216426 system_pods.go:89] "kube-apiserver-embed-certs-790346" [160ec369-c7d1-415d-bd81-807e8cb09deb] Running
	I1108 10:35:32.053588 1216426 system_pods.go:89] "kube-controller-manager-embed-certs-790346" [981fcf69-b2e5-4632-a888-b709045ba236] Running
	I1108 10:35:32.053592 1216426 system_pods.go:89] "kube-proxy-fx79j" [b9772cfb-4249-49a2-ab14-39aabc3dcc92] Running
	I1108 10:35:32.053596 1216426 system_pods.go:89] "kube-scheduler-embed-certs-790346" [77653d47-f56e-4a9c-b9ab-2f90a97947a8] Running
	I1108 10:35:32.053602 1216426 system_pods.go:89] "storage-provisioner" [30b396c5-a02e-4644-b513-31e6a6daf67b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:35:32.053623 1216426 retry.go:31] will retry after 302.721757ms: missing components: kube-dns
	I1108 10:35:32.360155 1216426 system_pods.go:86] 8 kube-system pods found
	I1108 10:35:32.360194 1216426 system_pods.go:89] "coredns-66bc5c9577-74xnp" [2be7fc7e-41f5-4dd2-bd38-28d8b7116878] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:35:32.360201 1216426 system_pods.go:89] "etcd-embed-certs-790346" [197baf26-b4ce-4eb3-a0b3-e77ae44ffc82] Running
	I1108 10:35:32.360208 1216426 system_pods.go:89] "kindnet-8978r" [ecd1e33a-2ecd-4aca-88f0-3f7c7546923d] Running
	I1108 10:35:32.360213 1216426 system_pods.go:89] "kube-apiserver-embed-certs-790346" [160ec369-c7d1-415d-bd81-807e8cb09deb] Running
	I1108 10:35:32.360218 1216426 system_pods.go:89] "kube-controller-manager-embed-certs-790346" [981fcf69-b2e5-4632-a888-b709045ba236] Running
	I1108 10:35:32.360222 1216426 system_pods.go:89] "kube-proxy-fx79j" [b9772cfb-4249-49a2-ab14-39aabc3dcc92] Running
	I1108 10:35:32.360226 1216426 system_pods.go:89] "kube-scheduler-embed-certs-790346" [77653d47-f56e-4a9c-b9ab-2f90a97947a8] Running
	I1108 10:35:32.360232 1216426 system_pods.go:89] "storage-provisioner" [30b396c5-a02e-4644-b513-31e6a6daf67b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:35:32.360247 1216426 retry.go:31] will retry after 381.023839ms: missing components: kube-dns
	I1108 10:35:32.745428 1216426 system_pods.go:86] 8 kube-system pods found
	I1108 10:35:32.745458 1216426 system_pods.go:89] "coredns-66bc5c9577-74xnp" [2be7fc7e-41f5-4dd2-bd38-28d8b7116878] Running
	I1108 10:35:32.745465 1216426 system_pods.go:89] "etcd-embed-certs-790346" [197baf26-b4ce-4eb3-a0b3-e77ae44ffc82] Running
	I1108 10:35:32.745470 1216426 system_pods.go:89] "kindnet-8978r" [ecd1e33a-2ecd-4aca-88f0-3f7c7546923d] Running
	I1108 10:35:32.745474 1216426 system_pods.go:89] "kube-apiserver-embed-certs-790346" [160ec369-c7d1-415d-bd81-807e8cb09deb] Running
	I1108 10:35:32.745479 1216426 system_pods.go:89] "kube-controller-manager-embed-certs-790346" [981fcf69-b2e5-4632-a888-b709045ba236] Running
	I1108 10:35:32.745482 1216426 system_pods.go:89] "kube-proxy-fx79j" [b9772cfb-4249-49a2-ab14-39aabc3dcc92] Running
	I1108 10:35:32.745487 1216426 system_pods.go:89] "kube-scheduler-embed-certs-790346" [77653d47-f56e-4a9c-b9ab-2f90a97947a8] Running
	I1108 10:35:32.745490 1216426 system_pods.go:89] "storage-provisioner" [30b396c5-a02e-4644-b513-31e6a6daf67b] Running
	I1108 10:35:32.745499 1216426 system_pods.go:126] duration metric: took 1.195230213s to wait for k8s-apps to be running ...
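The retry.go lines above re-check the kube-system pod list with growing delays until kube-dns (coredns) reports Running. A hedged, generic Go sketch of that wait-with-backoff pattern (the check function below is a stand-in, not the real pod query):

```go
// retry_until_running.go
// Sketch of the retry pattern above: re-check a condition with a growing,
// capped delay until it passes or the overall deadline expires.
package main

import (
	"fmt"
	"time"
)

func retryUntil(timeout time.Duration, check func() []string) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; still missing components: %v", missing)
		}
		fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
		time.Sleep(delay)
		if delay *= 2; delay > 2*time.Second {
			delay = 2 * time.Second // cap the backoff
		}
	}
}

func main() {
	attempts := 0
	_ = retryUntil(10*time.Second, func() []string {
		attempts++
		if attempts < 4 {
			return []string{"kube-dns"} // simulate coredns not yet Running
		}
		return nil
	})
	fmt.Println("all components running")
}
```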
	I1108 10:35:32.745507 1216426 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:35:32.745564 1216426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:35:32.783088 1216426 system_svc.go:56] duration metric: took 37.557298ms WaitForService to wait for kubelet
	I1108 10:35:32.783166 1216426 kubeadm.go:587] duration metric: took 43.075679684s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:35:32.783202 1216426 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:35:32.789555 1216426 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:35:32.789634 1216426 node_conditions.go:123] node cpu capacity is 2
	I1108 10:35:32.789662 1216426 node_conditions.go:105] duration metric: took 6.441217ms to run NodePressure ...
	I1108 10:35:32.789688 1216426 start.go:242] waiting for startup goroutines ...
	I1108 10:35:32.789722 1216426 start.go:247] waiting for cluster config update ...
	I1108 10:35:32.789750 1216426 start.go:256] writing updated cluster config ...
	I1108 10:35:32.790087 1216426 ssh_runner.go:195] Run: rm -f paused
	I1108 10:35:32.795320 1216426 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:35:32.845741 1216426 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-74xnp" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:35:32.851624 1216426 pod_ready.go:94] pod "coredns-66bc5c9577-74xnp" is "Ready"
	I1108 10:35:32.851696 1216426 pod_ready.go:86] duration metric: took 5.879921ms for pod "coredns-66bc5c9577-74xnp" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:35:32.854660 1216426 pod_ready.go:83] waiting for pod "etcd-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:35:32.865428 1216426 pod_ready.go:94] pod "etcd-embed-certs-790346" is "Ready"
	I1108 10:35:32.865502 1216426 pod_ready.go:86] duration metric: took 10.774263ms for pod "etcd-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:35:32.869728 1216426 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:35:32.877656 1216426 pod_ready.go:94] pod "kube-apiserver-embed-certs-790346" is "Ready"
	I1108 10:35:32.877727 1216426 pod_ready.go:86] duration metric: took 7.930057ms for pod "kube-apiserver-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:35:32.881380 1216426 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:35:33.201928 1216426 pod_ready.go:94] pod "kube-controller-manager-embed-certs-790346" is "Ready"
	I1108 10:35:33.202030 1216426 pod_ready.go:86] duration metric: took 320.575488ms for pod "kube-controller-manager-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:35:33.400665 1216426 pod_ready.go:83] waiting for pod "kube-proxy-fx79j" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:35:31.783328 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	W1108 10:35:34.277444 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	I1108 10:35:33.800135 1216426 pod_ready.go:94] pod "kube-proxy-fx79j" is "Ready"
	I1108 10:35:33.800162 1216426 pod_ready.go:86] duration metric: took 399.422017ms for pod "kube-proxy-fx79j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:35:34.000949 1216426 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:35:34.399797 1216426 pod_ready.go:94] pod "kube-scheduler-embed-certs-790346" is "Ready"
	I1108 10:35:34.399828 1216426 pod_ready.go:86] duration metric: took 398.850212ms for pod "kube-scheduler-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:35:34.399841 1216426 pod_ready.go:40] duration metric: took 1.60445387s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:35:34.465019 1216426 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:35:34.468106 1216426 out.go:179] * Done! kubectl is now configured to use "embed-certs-790346" cluster and "default" namespace by default
	W1108 10:35:36.278593 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	W1108 10:35:38.777633 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 08 10:35:31 embed-certs-790346 crio[840]: time="2025-11-08T10:35:31.644503164Z" level=info msg="Starting container: f52561a7e1ac85503c9573b6dd74fdb8c138d43ca38e9c593a471c2486eb4ccd" id=b646c76f-8255-4a06-92c0-1064da0aa090 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:35:31 embed-certs-790346 crio[840]: time="2025-11-08T10:35:31.646173144Z" level=info msg="Started container" PID=1780 containerID=eb3dad65d54be10373313abb990b3d33704ab6fb958c12fffc9324a1d870ff2f description=kube-system/storage-provisioner/storage-provisioner id=f3b107c4-58ba-4fb2-9920-f130081441b9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9fd30624bc4a09e70f19132ac8c957252cb45b918fbcb646113ac46d4eb073eb
	Nov 08 10:35:31 embed-certs-790346 crio[840]: time="2025-11-08T10:35:31.652350738Z" level=info msg="Started container" PID=1782 containerID=f52561a7e1ac85503c9573b6dd74fdb8c138d43ca38e9c593a471c2486eb4ccd description=kube-system/coredns-66bc5c9577-74xnp/coredns id=b646c76f-8255-4a06-92c0-1064da0aa090 name=/runtime.v1.RuntimeService/StartContainer sandboxID=56917a8e0e2a382f7fd5bfaeb5f682f0463b0ba27f5d3f3597235c5bcf8ddee3
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.026939014Z" level=info msg="Running pod sandbox: default/busybox/POD" id=08e7f9c3-0740-45ce-a9a9-0e970729a684 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.027031171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.033827897Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:737ff46c7a879f5fc36ae0451c4c1fd4677727b7e4532cd1533686bd9269d3fc UID:85b2c572-22bf-44ec-98e1-3e867fa1882e NetNS:/var/run/netns/8597d803-1356-4d5a-a215-6d650f1799d3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d9a8}] Aliases:map[]}"
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.033866378Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.053991103Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:737ff46c7a879f5fc36ae0451c4c1fd4677727b7e4532cd1533686bd9269d3fc UID:85b2c572-22bf-44ec-98e1-3e867fa1882e NetNS:/var/run/netns/8597d803-1356-4d5a-a215-6d650f1799d3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d9a8}] Aliases:map[]}"
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.054138545Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.058058263Z" level=info msg="Ran pod sandbox 737ff46c7a879f5fc36ae0451c4c1fd4677727b7e4532cd1533686bd9269d3fc with infra container: default/busybox/POD" id=08e7f9c3-0740-45ce-a9a9-0e970729a684 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.061568704Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d1d4c670-c351-4b31-b9f5-db80f1084a8b name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.061701787Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d1d4c670-c351-4b31-b9f5-db80f1084a8b name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.061742951Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d1d4c670-c351-4b31-b9f5-db80f1084a8b name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.062847861Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=020fbf95-d454-4cc1-9ecf-5b8e3f194f7c name=/runtime.v1.ImageService/PullImage
	Nov 08 10:35:35 embed-certs-790346 crio[840]: time="2025-11-08T10:35:35.065629496Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 10:35:37 embed-certs-790346 crio[840]: time="2025-11-08T10:35:37.305271652Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=020fbf95-d454-4cc1-9ecf-5b8e3f194f7c name=/runtime.v1.ImageService/PullImage
	Nov 08 10:35:37 embed-certs-790346 crio[840]: time="2025-11-08T10:35:37.306088867Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c9aa25bf-8024-4b09-9151-388f40faaf3e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:35:37 embed-certs-790346 crio[840]: time="2025-11-08T10:35:37.308766841Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bdcf8bdc-b184-43c7-a813-836734dc57b5 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:35:37 embed-certs-790346 crio[840]: time="2025-11-08T10:35:37.314818187Z" level=info msg="Creating container: default/busybox/busybox" id=940ad025-5f09-47bc-8e96-f250ec114040 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:35:37 embed-certs-790346 crio[840]: time="2025-11-08T10:35:37.314948456Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:35:37 embed-certs-790346 crio[840]: time="2025-11-08T10:35:37.319662274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:35:37 embed-certs-790346 crio[840]: time="2025-11-08T10:35:37.320106322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:35:37 embed-certs-790346 crio[840]: time="2025-11-08T10:35:37.33446488Z" level=info msg="Created container c6c35a2f08dae3caaa8648268c812e63d9801e078e4fef141d7f721d59987525: default/busybox/busybox" id=940ad025-5f09-47bc-8e96-f250ec114040 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:35:37 embed-certs-790346 crio[840]: time="2025-11-08T10:35:37.335422332Z" level=info msg="Starting container: c6c35a2f08dae3caaa8648268c812e63d9801e078e4fef141d7f721d59987525" id=59a22334-9371-48a6-8c37-e76fe0b80ba5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:35:37 embed-certs-790346 crio[840]: time="2025-11-08T10:35:37.337449815Z" level=info msg="Started container" PID=1844 containerID=c6c35a2f08dae3caaa8648268c812e63d9801e078e4fef141d7f721d59987525 description=default/busybox/busybox id=59a22334-9371-48a6-8c37-e76fe0b80ba5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=737ff46c7a879f5fc36ae0451c4c1fd4677727b7e4532cd1533686bd9269d3fc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	c6c35a2f08dae       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   6 seconds ago        Running             busybox                   0                   737ff46c7a879       busybox                                      default
	f52561a7e1ac8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   56917a8e0e2a3       coredns-66bc5c9577-74xnp                     kube-system
	eb3dad65d54be       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   9fd30624bc4a0       storage-provisioner                          kube-system
	eff4a4412a052       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   ea75d5b0a5315       kindnet-8978r                                kube-system
	ca285527263b2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   ce874f5051672       kube-proxy-fx79j                             kube-system
	b91b079321ca3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   e1c40cbca5269       kube-controller-manager-embed-certs-790346   kube-system
	5bc5b343b704c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   1f4a54df0acac       etcd-embed-certs-790346                      kube-system
	9db60ca07d101       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   e7c53a234a367       kube-apiserver-embed-certs-790346            kube-system
	5171fc4e78426       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   66fe909409c98       kube-scheduler-embed-certs-790346            kube-system
	
	
	==> coredns [f52561a7e1ac85503c9573b6dd74fdb8c138d43ca38e9c593a471c2486eb4ccd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45916 - 27137 "HINFO IN 3795562552630843097.7085021179604255506. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011762755s
	
	
	==> describe nodes <==
	Name:               embed-certs-790346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-790346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=embed-certs-790346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_34_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:34:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-790346
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:35:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:35:31 +0000   Sat, 08 Nov 2025 10:34:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:35:31 +0000   Sat, 08 Nov 2025 10:34:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:35:31 +0000   Sat, 08 Nov 2025 10:34:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:35:31 +0000   Sat, 08 Nov 2025 10:35:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-790346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                eee914a9-8e5e-440d-b038-b0a41c7677a4
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-74xnp                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-790346                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-8978r                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-790346             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-790346    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-fx79j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-790346             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 54s   kube-proxy       
	  Normal   Starting                 60s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s   kubelet          Node embed-certs-790346 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s   kubelet          Node embed-certs-790346 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s   kubelet          Node embed-certs-790346 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s   node-controller  Node embed-certs-790346 event: Registered Node embed-certs-790346 in Controller
	  Normal   NodeReady                13s   kubelet          Node embed-certs-790346 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 8 10:13] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[ +18.424643] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5bc5b343b704c6f2699508fad0574364cb017b12c8aa3dc6084fc8491ea8877e] <==
	{"level":"warn","ts":"2025-11-08T10:34:39.463045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.492045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.515645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.533761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.550622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.567350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.592635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.621629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.653023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.677215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.694934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.729891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.740296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.768695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.811520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.845303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.884866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.914763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.936681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.954861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:39.972853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:40.022929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:40.047161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:40.092553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:34:40.284624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35716","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:35:44 up  9:18,  0 user,  load average: 3.32, 3.57, 2.97
	Linux embed-certs-790346 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eff4a4412a052c27022380d55de2963d26a287b6d58daa2d7fbb8fb099057316] <==
	I1108 10:34:50.443435       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:34:50.443798       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:34:50.444104       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:34:50.509708       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:34:50.509747       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:34:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:34:50.630208       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:34:50.630280       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:34:50.630311       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:34:50.630507       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:35:20.629727       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 10:35:20.629893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:35:20.629996       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:35:20.631320       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1108 10:35:22.231169       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:35:22.231213       1 metrics.go:72] Registering metrics
	I1108 10:35:22.231291       1 controller.go:711] "Syncing nftables rules"
	I1108 10:35:30.636003       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:35:30.636057       1 main.go:301] handling current node
	I1108 10:35:40.629744       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:35:40.629787       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9db60ca07d101a05205c0a768b55735aa00f2379cfdd7b19446299998f4bb088] <==
	I1108 10:34:41.515949       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:34:41.528836       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:34:41.542896       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 10:34:41.543678       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:34:41.572049       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:34:41.588114       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:34:41.588223       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 10:34:42.208819       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 10:34:42.231881       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 10:34:42.232089       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:34:43.225543       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:34:43.292045       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:34:43.444218       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 10:34:43.456744       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1108 10:34:43.458048       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:34:43.469660       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:34:43.536184       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:34:44.206683       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:34:44.260237       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 10:34:44.276175       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 10:34:49.388682       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:34:49.539280       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:34:49.544951       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:34:49.589489       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1108 10:35:42.852937       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:39234: use of closed network connection
	
	
	==> kube-controller-manager [b91b079321ca36768dac4c3c961678257fb1810d9bae6b3a6078508b908b661c] <==
	I1108 10:34:48.586291       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 10:34:48.586345       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 10:34:48.586561       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:34:48.586622       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:34:48.586661       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:34:48.586731       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:34:48.586968       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:34:48.587025       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:34:48.587064       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:34:48.587177       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 10:34:48.586590       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 10:34:48.587330       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 10:34:48.597614       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:34:48.613143       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:34:48.624600       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:34:48.631489       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 10:34:48.631914       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:34:48.631973       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:34:48.631986       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:34:48.631993       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:34:48.636512       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 10:34:48.636556       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 10:34:48.636690       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:34:48.639135       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:35:33.554001       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ca285527263b25e472fd98d1c011b438554c185d1a8e880f1a0383002deacd1c] <==
	I1108 10:34:50.215303       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:34:50.286175       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:34:50.387106       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:34:50.387139       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:34:50.387213       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:34:50.486511       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:34:50.486567       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:34:50.520688       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:34:50.527985       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:34:50.544376       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:34:50.546089       1 config.go:200] "Starting service config controller"
	I1108 10:34:50.546100       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:34:50.546116       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:34:50.546121       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:34:50.546132       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:34:50.546137       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:34:50.547168       1 config.go:309] "Starting node config controller"
	I1108 10:34:50.547177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:34:50.547183       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:34:50.646228       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:34:50.646263       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:34:50.646309       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5171fc4e784265f2941a8caa3af77f2138fd60ca818219a48fa8a7d1037b58a3] <==
	E1108 10:34:41.644597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:34:41.644678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:34:41.644750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:34:41.644822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:34:41.644998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:34:41.645083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:34:41.645156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:34:41.645279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:34:41.645362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 10:34:41.645630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:34:41.645719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:34:42.450616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1108 10:34:42.450846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:34:42.460829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:34:42.496333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:34:42.496741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:34:42.544672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:34:42.568629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:34:42.589484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:34:42.667635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:34:42.712572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:34:42.726851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:34:42.785089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 10:34:42.864211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1108 10:34:45.135487       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:34:45 embed-certs-790346 kubelet[1341]: I1108 10:34:45.512913    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-790346" podStartSLOduration=1.512896428 podStartE2EDuration="1.512896428s" podCreationTimestamp="2025-11-08 10:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:34:45.482853658 +0000 UTC m=+1.427191927" watchObservedRunningTime="2025-11-08 10:34:45.512896428 +0000 UTC m=+1.457234673"
	Nov 08 10:34:45 embed-certs-790346 kubelet[1341]: I1108 10:34:45.532652    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-790346" podStartSLOduration=1.5326321219999999 podStartE2EDuration="1.532632122s" podCreationTimestamp="2025-11-08 10:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:34:45.513246259 +0000 UTC m=+1.457584512" watchObservedRunningTime="2025-11-08 10:34:45.532632122 +0000 UTC m=+1.476970367"
	Nov 08 10:34:45 embed-certs-790346 kubelet[1341]: I1108 10:34:45.561937    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-790346" podStartSLOduration=1.561917024 podStartE2EDuration="1.561917024s" podCreationTimestamp="2025-11-08 10:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:34:45.532980313 +0000 UTC m=+1.477318558" watchObservedRunningTime="2025-11-08 10:34:45.561917024 +0000 UTC m=+1.506255269"
	Nov 08 10:34:48 embed-certs-790346 kubelet[1341]: I1108 10:34:48.591107    1341 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 10:34:48 embed-certs-790346 kubelet[1341]: I1108 10:34:48.592897    1341 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 10:34:49 embed-certs-790346 kubelet[1341]: I1108 10:34:49.738929    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9772cfb-4249-49a2-ab14-39aabc3dcc92-kube-proxy\") pod \"kube-proxy-fx79j\" (UID: \"b9772cfb-4249-49a2-ab14-39aabc3dcc92\") " pod="kube-system/kube-proxy-fx79j"
	Nov 08 10:34:49 embed-certs-790346 kubelet[1341]: I1108 10:34:49.738971    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecd1e33a-2ecd-4aca-88f0-3f7c7546923d-xtables-lock\") pod \"kindnet-8978r\" (UID: \"ecd1e33a-2ecd-4aca-88f0-3f7c7546923d\") " pod="kube-system/kindnet-8978r"
	Nov 08 10:34:49 embed-certs-790346 kubelet[1341]: I1108 10:34:49.739011    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecd1e33a-2ecd-4aca-88f0-3f7c7546923d-lib-modules\") pod \"kindnet-8978r\" (UID: \"ecd1e33a-2ecd-4aca-88f0-3f7c7546923d\") " pod="kube-system/kindnet-8978r"
	Nov 08 10:34:49 embed-certs-790346 kubelet[1341]: I1108 10:34:49.739028    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9772cfb-4249-49a2-ab14-39aabc3dcc92-xtables-lock\") pod \"kube-proxy-fx79j\" (UID: \"b9772cfb-4249-49a2-ab14-39aabc3dcc92\") " pod="kube-system/kube-proxy-fx79j"
	Nov 08 10:34:49 embed-certs-790346 kubelet[1341]: I1108 10:34:49.739075    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ecd1e33a-2ecd-4aca-88f0-3f7c7546923d-cni-cfg\") pod \"kindnet-8978r\" (UID: \"ecd1e33a-2ecd-4aca-88f0-3f7c7546923d\") " pod="kube-system/kindnet-8978r"
	Nov 08 10:34:49 embed-certs-790346 kubelet[1341]: I1108 10:34:49.739095    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9772cfb-4249-49a2-ab14-39aabc3dcc92-lib-modules\") pod \"kube-proxy-fx79j\" (UID: \"b9772cfb-4249-49a2-ab14-39aabc3dcc92\") " pod="kube-system/kube-proxy-fx79j"
	Nov 08 10:34:49 embed-certs-790346 kubelet[1341]: I1108 10:34:49.739112    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26mvm\" (UniqueName: \"kubernetes.io/projected/b9772cfb-4249-49a2-ab14-39aabc3dcc92-kube-api-access-26mvm\") pod \"kube-proxy-fx79j\" (UID: \"b9772cfb-4249-49a2-ab14-39aabc3dcc92\") " pod="kube-system/kube-proxy-fx79j"
	Nov 08 10:34:49 embed-certs-790346 kubelet[1341]: I1108 10:34:49.739136    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk46m\" (UniqueName: \"kubernetes.io/projected/ecd1e33a-2ecd-4aca-88f0-3f7c7546923d-kube-api-access-nk46m\") pod \"kindnet-8978r\" (UID: \"ecd1e33a-2ecd-4aca-88f0-3f7c7546923d\") " pod="kube-system/kindnet-8978r"
	Nov 08 10:34:49 embed-certs-790346 kubelet[1341]: I1108 10:34:49.938702    1341 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:34:50 embed-certs-790346 kubelet[1341]: W1108 10:34:50.237927    1341 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/crio-ea75d5b0a531501b6127798a317c91820e062d0d7707dedaaae09aa946c518d6 WatchSource:0}: Error finding container ea75d5b0a531501b6127798a317c91820e062d0d7707dedaaae09aa946c518d6: Status 404 returned error can't find the container with id ea75d5b0a531501b6127798a317c91820e062d0d7707dedaaae09aa946c518d6
	Nov 08 10:34:50 embed-certs-790346 kubelet[1341]: I1108 10:34:50.432103    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8978r" podStartSLOduration=1.43207164 podStartE2EDuration="1.43207164s" podCreationTimestamp="2025-11-08 10:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:34:50.43157668 +0000 UTC m=+6.375914933" watchObservedRunningTime="2025-11-08 10:34:50.43207164 +0000 UTC m=+6.376409884"
	Nov 08 10:34:51 embed-certs-790346 kubelet[1341]: I1108 10:34:51.844568    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fx79j" podStartSLOduration=2.8445488 podStartE2EDuration="2.8445488s" podCreationTimestamp="2025-11-08 10:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:34:50.464654985 +0000 UTC m=+6.408993263" watchObservedRunningTime="2025-11-08 10:34:51.8445488 +0000 UTC m=+7.788887053"
	Nov 08 10:35:31 embed-certs-790346 kubelet[1341]: I1108 10:35:31.085759    1341 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 10:35:31 embed-certs-790346 kubelet[1341]: I1108 10:35:31.146724    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmwrz\" (UniqueName: \"kubernetes.io/projected/2be7fc7e-41f5-4dd2-bd38-28d8b7116878-kube-api-access-vmwrz\") pod \"coredns-66bc5c9577-74xnp\" (UID: \"2be7fc7e-41f5-4dd2-bd38-28d8b7116878\") " pod="kube-system/coredns-66bc5c9577-74xnp"
	Nov 08 10:35:31 embed-certs-790346 kubelet[1341]: I1108 10:35:31.146782    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2be7fc7e-41f5-4dd2-bd38-28d8b7116878-config-volume\") pod \"coredns-66bc5c9577-74xnp\" (UID: \"2be7fc7e-41f5-4dd2-bd38-28d8b7116878\") " pod="kube-system/coredns-66bc5c9577-74xnp"
	Nov 08 10:35:31 embed-certs-790346 kubelet[1341]: I1108 10:35:31.247293    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/30b396c5-a02e-4644-b513-31e6a6daf67b-tmp\") pod \"storage-provisioner\" (UID: \"30b396c5-a02e-4644-b513-31e6a6daf67b\") " pod="kube-system/storage-provisioner"
	Nov 08 10:35:31 embed-certs-790346 kubelet[1341]: I1108 10:35:31.247348    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49brt\" (UniqueName: \"kubernetes.io/projected/30b396c5-a02e-4644-b513-31e6a6daf67b-kube-api-access-49brt\") pod \"storage-provisioner\" (UID: \"30b396c5-a02e-4644-b513-31e6a6daf67b\") " pod="kube-system/storage-provisioner"
	Nov 08 10:35:32 embed-certs-790346 kubelet[1341]: I1108 10:35:32.614599    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-74xnp" podStartSLOduration=43.614578449 podStartE2EDuration="43.614578449s" podCreationTimestamp="2025-11-08 10:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:35:32.590981418 +0000 UTC m=+48.535319671" watchObservedRunningTime="2025-11-08 10:35:32.614578449 +0000 UTC m=+48.558916702"
	Nov 08 10:35:34 embed-certs-790346 kubelet[1341]: I1108 10:35:34.715671    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=44.715651243 podStartE2EDuration="44.715651243s" podCreationTimestamp="2025-11-08 10:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:35:32.655092892 +0000 UTC m=+48.599431137" watchObservedRunningTime="2025-11-08 10:35:34.715651243 +0000 UTC m=+50.659989488"
	Nov 08 10:35:34 embed-certs-790346 kubelet[1341]: I1108 10:35:34.791989    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24bs5\" (UniqueName: \"kubernetes.io/projected/85b2c572-22bf-44ec-98e1-3e867fa1882e-kube-api-access-24bs5\") pod \"busybox\" (UID: \"85b2c572-22bf-44ec-98e1-3e867fa1882e\") " pod="default/busybox"
	
	
	==> storage-provisioner [eb3dad65d54be10373313abb990b3d33704ab6fb958c12fffc9324a1d870ff2f] <==
	I1108 10:35:31.669020       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:35:31.707493       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:35:31.707595       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:35:31.723204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:31.733235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:35:31.733379       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:35:31.738335       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-790346_fab17371-186d-4df9-a8dd-c156549c763e!
	I1108 10:35:31.738415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58b9af2b-5b91-43b5-9be5-4a96191976d2", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-790346_fab17371-186d-4df9-a8dd-c156549c763e became leader
	W1108 10:35:31.748600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:31.771402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:35:31.838924       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-790346_fab17371-186d-4df9-a8dd-c156549c763e!
	W1108 10:35:33.775103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:33.782794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:35.786024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:35.790748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:37.793920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:37.798329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:39.801420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:39.806110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:41.809784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:41.819425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:43.823358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:43.839890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790346 -n embed-certs-790346
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-790346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.77s)
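The post-mortem above ends by listing pods that are not in the Running phase (helpers_test.go:269). For re-running that same check by hand outside the test harness, here is a minimal Go sketch; the kubectl arguments and the context name are copied from the log, while the function name and error handling are illustrative assumptions, and kubectl is assumed to be on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// nonRunningPods mirrors the post-mortem check logged above:
	// kubectl --context <ctx> get po -A -o=jsonpath={.items[*].metadata.name} --field-selector=status.phase!=Running
	func nonRunningPods(kubeContext string) ([]string, error) {
		out, err := exec.Command("kubectl",
			"--context", kubeContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		pods, err := nonRunningPods("embed-certs-790346") // profile/context name taken from the failing test
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Println("pods not in Running phase:", pods)
	}
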

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-236075 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-236075 --alsologtostderr -v=1: exit status 80 (2.232422573s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-236075 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:36:14.206818 1224689 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:36:14.206991 1224689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:36:14.207004 1224689 out.go:374] Setting ErrFile to fd 2...
	I1108 10:36:14.207009 1224689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:36:14.207299 1224689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:36:14.207587 1224689 out.go:368] Setting JSON to false
	I1108 10:36:14.207613 1224689 mustload.go:66] Loading cluster: default-k8s-diff-port-236075
	I1108 10:36:14.208026 1224689 config.go:182] Loaded profile config "default-k8s-diff-port-236075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:36:14.208611 1224689 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-236075 --format={{.State.Status}}
	I1108 10:36:14.230238 1224689 host.go:66] Checking if "default-k8s-diff-port-236075" exists ...
	I1108 10:36:14.230731 1224689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:36:14.335258 1224689 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-08 10:36:14.325572875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:36:14.335984 1224689 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-236075 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:36:14.339835 1224689 out.go:179] * Pausing node default-k8s-diff-port-236075 ... 
	I1108 10:36:14.342762 1224689 host.go:66] Checking if "default-k8s-diff-port-236075" exists ...
	I1108 10:36:14.343108 1224689 ssh_runner.go:195] Run: systemctl --version
	I1108 10:36:14.343160 1224689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-236075
	I1108 10:36:14.363451 1224689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/default-k8s-diff-port-236075/id_rsa Username:docker}
	I1108 10:36:14.467214 1224689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:36:14.505171 1224689 pause.go:52] kubelet running: true
	I1108 10:36:14.505253 1224689 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:36:14.843678 1224689 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:36:14.843763 1224689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:36:14.915628 1224689 cri.go:89] found id: "13e1625e444fc357bab28f8b30257f63116424e84a77d8a8e1251a97e1e2f759"
	I1108 10:36:14.915658 1224689 cri.go:89] found id: "3f4eafd65d1d0509a5aa57695cc1c4d02ae484f6de117480550722edeb2c155e"
	I1108 10:36:14.915664 1224689 cri.go:89] found id: "2e334bec697058bae86b58475b1c435cee36106778bc232276551557d398810c"
	I1108 10:36:14.915667 1224689 cri.go:89] found id: "d156164180806a75e51f45a02fba01ad1a09a5d84bc02c3049c5b2256db77b0e"
	I1108 10:36:14.915671 1224689 cri.go:89] found id: "055c9437ada6c108a9ef6e524d0a66bf1dfcc081baabb70652559e4f149edd8d"
	I1108 10:36:14.915674 1224689 cri.go:89] found id: "01e006bfc6ddabc4f5b52b75d55b814f77b7715ec181a90987b6959c64dc9976"
	I1108 10:36:14.915677 1224689 cri.go:89] found id: "acec2edc4de9822c06eae3e3c3a9f215ef4f521d8d4f7376ca41845506b657b4"
	I1108 10:36:14.915680 1224689 cri.go:89] found id: "fa7185ae3ba9637256692faca55ed64deec71e9effbe9eebdae3f3c26cca6005"
	I1108 10:36:14.915683 1224689 cri.go:89] found id: "7e2e28dd3fc4c2eca9405df29e70031d910548f4d6fcf55d46048b375ddadca6"
	I1108 10:36:14.915689 1224689 cri.go:89] found id: "9248499b7cf3dde2e3a3d480cca7fb372cdc9053f05a33387e99151065e29b36"
	I1108 10:36:14.915692 1224689 cri.go:89] found id: "0fddc1f75d93c56972bc3e7f7b7bfa6c0c0e4208c9f791848d50f9dd3ddbeda3"
	I1108 10:36:14.915695 1224689 cri.go:89] found id: ""
	I1108 10:36:14.915751 1224689 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:36:14.929521 1224689 retry.go:31] will retry after 325.685185ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:36:14Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:36:15.255975 1224689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:36:15.273125 1224689 pause.go:52] kubelet running: false
	I1108 10:36:15.273187 1224689 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:36:15.472642 1224689 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:36:15.472719 1224689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:36:15.564809 1224689 cri.go:89] found id: "13e1625e444fc357bab28f8b30257f63116424e84a77d8a8e1251a97e1e2f759"
	I1108 10:36:15.564834 1224689 cri.go:89] found id: "3f4eafd65d1d0509a5aa57695cc1c4d02ae484f6de117480550722edeb2c155e"
	I1108 10:36:15.564839 1224689 cri.go:89] found id: "2e334bec697058bae86b58475b1c435cee36106778bc232276551557d398810c"
	I1108 10:36:15.564843 1224689 cri.go:89] found id: "d156164180806a75e51f45a02fba01ad1a09a5d84bc02c3049c5b2256db77b0e"
	I1108 10:36:15.564847 1224689 cri.go:89] found id: "055c9437ada6c108a9ef6e524d0a66bf1dfcc081baabb70652559e4f149edd8d"
	I1108 10:36:15.564851 1224689 cri.go:89] found id: "01e006bfc6ddabc4f5b52b75d55b814f77b7715ec181a90987b6959c64dc9976"
	I1108 10:36:15.564854 1224689 cri.go:89] found id: "acec2edc4de9822c06eae3e3c3a9f215ef4f521d8d4f7376ca41845506b657b4"
	I1108 10:36:15.564857 1224689 cri.go:89] found id: "fa7185ae3ba9637256692faca55ed64deec71e9effbe9eebdae3f3c26cca6005"
	I1108 10:36:15.564860 1224689 cri.go:89] found id: "7e2e28dd3fc4c2eca9405df29e70031d910548f4d6fcf55d46048b375ddadca6"
	I1108 10:36:15.564866 1224689 cri.go:89] found id: "9248499b7cf3dde2e3a3d480cca7fb372cdc9053f05a33387e99151065e29b36"
	I1108 10:36:15.564869 1224689 cri.go:89] found id: "0fddc1f75d93c56972bc3e7f7b7bfa6c0c0e4208c9f791848d50f9dd3ddbeda3"
	I1108 10:36:15.564872 1224689 cri.go:89] found id: ""
	I1108 10:36:15.564928 1224689 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:36:15.582450 1224689 retry.go:31] will retry after 361.7858ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:36:15Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:36:15.945016 1224689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:36:15.960791 1224689 pause.go:52] kubelet running: false
	I1108 10:36:15.960899 1224689 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:36:16.227707 1224689 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:36:16.227802 1224689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:36:16.340094 1224689 cri.go:89] found id: "13e1625e444fc357bab28f8b30257f63116424e84a77d8a8e1251a97e1e2f759"
	I1108 10:36:16.340116 1224689 cri.go:89] found id: "3f4eafd65d1d0509a5aa57695cc1c4d02ae484f6de117480550722edeb2c155e"
	I1108 10:36:16.340121 1224689 cri.go:89] found id: "2e334bec697058bae86b58475b1c435cee36106778bc232276551557d398810c"
	I1108 10:36:16.340125 1224689 cri.go:89] found id: "d156164180806a75e51f45a02fba01ad1a09a5d84bc02c3049c5b2256db77b0e"
	I1108 10:36:16.340128 1224689 cri.go:89] found id: "055c9437ada6c108a9ef6e524d0a66bf1dfcc081baabb70652559e4f149edd8d"
	I1108 10:36:16.340132 1224689 cri.go:89] found id: "01e006bfc6ddabc4f5b52b75d55b814f77b7715ec181a90987b6959c64dc9976"
	I1108 10:36:16.340136 1224689 cri.go:89] found id: "acec2edc4de9822c06eae3e3c3a9f215ef4f521d8d4f7376ca41845506b657b4"
	I1108 10:36:16.340140 1224689 cri.go:89] found id: "fa7185ae3ba9637256692faca55ed64deec71e9effbe9eebdae3f3c26cca6005"
	I1108 10:36:16.340148 1224689 cri.go:89] found id: "7e2e28dd3fc4c2eca9405df29e70031d910548f4d6fcf55d46048b375ddadca6"
	I1108 10:36:16.340156 1224689 cri.go:89] found id: "9248499b7cf3dde2e3a3d480cca7fb372cdc9053f05a33387e99151065e29b36"
	I1108 10:36:16.340160 1224689 cri.go:89] found id: "0fddc1f75d93c56972bc3e7f7b7bfa6c0c0e4208c9f791848d50f9dd3ddbeda3"
	I1108 10:36:16.340163 1224689 cri.go:89] found id: ""
	I1108 10:36:16.340211 1224689 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:36:16.363196 1224689 out.go:203] 
	W1108 10:36:16.366156 1224689 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:36:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:36:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:36:16.366232 1224689 out.go:285] * 
	* 
	W1108 10:36:16.375981 1224689 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:36:16.379658 1224689 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-236075 --alsologtostderr -v=1 failed: exit status 80
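The stderr above shows the step that fails: after stopping the kubelet and enumerating CRI containers, the pause path runs `sudo runc list -f json` on the node, retries twice (after ~325ms and ~361ms), and then exits with GUEST_PAUSE because /run/runc does not exist. Below is a standalone Go sketch of just that retry step, useful for reproducing the error on a node by hand; the retry budget, sleep, and helper name are illustrative assumptions, not minikube's actual pause code, and sudo plus runc are assumed to be available where it runs:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runcList runs the same command the pause path runs on the node: sudo runc list -f json
	func runcList() (string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		return string(out), err
	}

	func main() {
		for attempt := 1; attempt <= 3; attempt++ {
			out, err := runcList()
			if err == nil {
				fmt.Print(out)
				return
			}
			// On the failing node this surfaces: open /run/runc: no such file or directory
			fmt.Printf("attempt %d failed: %v\n%s\n", attempt, err, out)
			time.Sleep(350 * time.Millisecond) // the log above retries after ~325ms and ~361ms
		}
		fmt.Println("giving up after 3 attempts (the pause command exits with GUEST_PAUSE at this point)")
	}
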
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-236075
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-236075:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf",
	        "Created": "2025-11-08T10:33:26.092972115Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1219898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:35:09.618880393Z",
	            "FinishedAt": "2025-11-08T10:35:08.76544023Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/hostname",
	        "HostsPath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/hosts",
	        "LogPath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf-json.log",
	        "Name": "/default-k8s-diff-port-236075",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-236075:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-236075",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf",
	                "LowerDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-236075",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-236075/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-236075",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-236075",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-236075",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "07f71e87a632c9dc8aa452b7fef3a95b6c40b1b34ba3efe4c7453f5a0d799dc1",
	            "SandboxKey": "/var/run/docker/netns/07f71e87a632",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34527"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34528"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34531"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34529"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34530"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-236075": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:9e:d8:10:73:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38f263a32d28f326bd7caf8b4f69506dbe3e875f124d60f1d6382480728769c0",
	                    "EndpointID": "bbbf96e920d663c75da9c14bef9febce70579004139a33fb9eb2994bddcc1af6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-236075",
	                        "764db5e58d40"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
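The inspect output above lists the container's published ports; 22/tcp maps to 127.0.0.1:34527, which is exactly the SSH endpoint the pause command dialed in the stderr log (sshutil.go:53). The following Go sketch extracts that port with the same Go template the test shells out with (minus the literal single quotes it wraps around the value); the wrapper function is illustrative and assumes the docker CLI is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort asks Docker which host port 22/tcp is published on, using the
	// template shown in the cli_runner.go line of the stderr log above.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container,
		).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("default-k8s-diff-port-236075") // container name from the inspect output above
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("22/tcp is published on 127.0.0.1:" + port) // 34527 for the run captured above
	}
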
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075: exit status 2 (429.522094ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-236075 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-236075 logs -n 25: (1.74954485s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-517657 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-517657                                                                                                                                                                                                                        │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-171136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-171136 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │ 08 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-171136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ image   │ old-k8s-version-171136 image list --format=json                                                                                                                                                                                               │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-171136 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-837698                                                                                                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-236075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-236075 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-236075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-790346 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-790346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	│ image   │ default-k8s-diff-port-236075 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ pause   │ -p default-k8s-diff-port-236075 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:35:57
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:35:57.891140 1222758 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:35:57.891259 1222758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:35:57.891271 1222758 out.go:374] Setting ErrFile to fd 2...
	I1108 10:35:57.891276 1222758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:35:57.891549 1222758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:35:57.891903 1222758 out.go:368] Setting JSON to false
	I1108 10:35:57.893278 1222758 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33503,"bootTime":1762564655,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:35:57.893386 1222758 start.go:143] virtualization:  
	I1108 10:35:57.896425 1222758 out.go:179] * [embed-certs-790346] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:35:57.899690 1222758 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:35:57.899742 1222758 notify.go:221] Checking for updates...
	I1108 10:35:57.905999 1222758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:35:57.908831 1222758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:35:57.911777 1222758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:35:57.914706 1222758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:35:57.917791 1222758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:35:57.921155 1222758 config.go:182] Loaded profile config "embed-certs-790346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:35:57.921803 1222758 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:35:57.955723 1222758 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:35:57.955842 1222758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:35:58.015549 1222758 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:35:58.005224611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:35:58.015672 1222758 docker.go:319] overlay module found
	I1108 10:35:58.018744 1222758 out.go:179] * Using the docker driver based on existing profile
	I1108 10:35:58.021736 1222758 start.go:309] selected driver: docker
	I1108 10:35:58.021767 1222758 start.go:930] validating driver "docker" against &{Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:35:58.021869 1222758 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:35:58.022661 1222758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:35:58.082191 1222758 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:35:58.072378446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:35:58.082592 1222758 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:35:58.082627 1222758 cni.go:84] Creating CNI manager for ""
	I1108 10:35:58.082693 1222758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:35:58.082745 1222758 start.go:353] cluster config:
	{Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:35:58.087706 1222758 out.go:179] * Starting "embed-certs-790346" primary control-plane node in "embed-certs-790346" cluster
	I1108 10:35:58.090580 1222758 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:35:58.093668 1222758 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:35:58.096620 1222758 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:35:58.096690 1222758 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:35:58.096721 1222758 cache.go:59] Caching tarball of preloaded images
	I1108 10:35:58.096719 1222758 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:35:58.096807 1222758 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:35:58.096818 1222758 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:35:58.096935 1222758 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/config.json ...
	I1108 10:35:58.116984 1222758 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:35:58.117008 1222758 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:35:58.117027 1222758 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:35:58.117052 1222758 start.go:360] acquireMachinesLock for embed-certs-790346: {Name:mka3c0f23b810acc7356b6e9fd36989eb99bdea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:35:58.117110 1222758 start.go:364] duration metric: took 35.773µs to acquireMachinesLock for "embed-certs-790346"
	I1108 10:35:58.117134 1222758 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:35:58.117140 1222758 fix.go:54] fixHost starting: 
	I1108 10:35:58.117405 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:35:58.136171 1222758 fix.go:112] recreateIfNeeded on embed-certs-790346: state=Stopped err=<nil>
	W1108 10:35:58.136210 1222758 fix.go:138] unexpected machine state, will restart: <nil>
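
The fix.go lines above decide whether to restart or recreate the machine by reading the container state from `docker container inspect --format={{.State.Status}}`. A minimal Go sketch of that kind of check, shelling out to the docker CLI the same way (the container name is taken from the log; error handling is trimmed):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// containerState runs the same query the log shows:
	// docker container inspect <name> --format={{.State.Status}}
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("embed-certs-790346")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("container state:", state) // e.g. "running" or "exited"
	}
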
	W1108 10:35:56.278228 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	W1108 10:35:58.278335 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	W1108 10:36:00.290557 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	I1108 10:36:00.778054 1219770 pod_ready.go:94] pod "coredns-66bc5c9577-x99cj" is "Ready"
	I1108 10:36:00.778085 1219770 pod_ready.go:86] duration metric: took 37.505764537s for pod "coredns-66bc5c9577-x99cj" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.780581 1219770 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.784726 1219770 pod_ready.go:94] pod "etcd-default-k8s-diff-port-236075" is "Ready"
	I1108 10:36:00.784750 1219770 pod_ready.go:86] duration metric: took 4.142079ms for pod "etcd-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.786844 1219770 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.790988 1219770 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-236075" is "Ready"
	I1108 10:36:00.791013 1219770 pod_ready.go:86] duration metric: took 4.145853ms for pod "kube-apiserver-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.793309 1219770 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.976587 1219770 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-236075" is "Ready"
	I1108 10:36:00.976618 1219770 pod_ready.go:86] duration metric: took 183.282927ms for pod "kube-controller-manager-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:01.176974 1219770 pod_ready.go:83] waiting for pod "kube-proxy-rtchk" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:01.577624 1219770 pod_ready.go:94] pod "kube-proxy-rtchk" is "Ready"
	I1108 10:36:01.577652 1219770 pod_ready.go:86] duration metric: took 400.647366ms for pod "kube-proxy-rtchk" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:01.776739 1219770 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:02.176954 1219770 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-236075" is "Ready"
	I1108 10:36:02.177002 1219770 pod_ready.go:86] duration metric: took 400.185678ms for pod "kube-scheduler-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:02.177017 1219770 pod_ready.go:40] duration metric: took 38.947041769s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:36:02.267768 1219770 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:36:02.271327 1219770 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-236075" cluster and "default" namespace by default
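
The pod_ready.go entries above poll each kube-system pod until its Ready condition reports true (the real helper also tolerates the pod disappearing). A rough client-go sketch of the same kind of wait; the kubeconfig path, timeout, and pod name are illustrative, not minikube's exact values:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // illustrative timeout
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-66bc5c9577-x99cj", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // the log shows roughly 2s between checks
		}
		log.Fatal("timed out waiting for pod to become Ready")
	}
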
	I1108 10:35:58.139431 1222758 out.go:252] * Restarting existing docker container for "embed-certs-790346" ...
	I1108 10:35:58.139523 1222758 cli_runner.go:164] Run: docker start embed-certs-790346
	I1108 10:35:58.417343 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:35:58.439000 1222758 kic.go:430] container "embed-certs-790346" state is running.
	I1108 10:35:58.439382 1222758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790346
	I1108 10:35:58.465069 1222758 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/config.json ...
	I1108 10:35:58.465304 1222758 machine.go:94] provisionDockerMachine start ...
	I1108 10:35:58.465364 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:35:58.486576 1222758 main.go:143] libmachine: Using SSH client type: native
	I1108 10:35:58.487869 1222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1108 10:35:58.487889 1222758 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:35:58.488534 1222758 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49344->127.0.0.1:34532: read: connection reset by peer
	I1108 10:36:01.656540 1222758 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790346
	
	I1108 10:36:01.656575 1222758 ubuntu.go:182] provisioning hostname "embed-certs-790346"
	I1108 10:36:01.656651 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:01.676565 1222758 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:01.676914 1222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1108 10:36:01.676933 1222758 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-790346 && echo "embed-certs-790346" | sudo tee /etc/hostname
	I1108 10:36:01.848873 1222758 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790346
	
	I1108 10:36:01.848972 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:01.870189 1222758 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:01.870538 1222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1108 10:36:01.870589 1222758 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-790346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-790346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-790346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:36:02.037555 1222758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
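
The libmachine lines above dial the container's forwarded SSH port (127.0.0.1:34532) and run provisioning commands such as `hostname` and the `/etc/hosts` snippet over that connection. A bare-bones Go sketch of running one remote command with golang.org/x/crypto/ssh; the key path, user, and port are copied from the log, and host-key checking is skipped purely for brevity:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node, not for real hosts
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34532", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}
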
	I1108 10:36:02.037581 1222758 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:36:02.037599 1222758 ubuntu.go:190] setting up certificates
	I1108 10:36:02.037610 1222758 provision.go:84] configureAuth start
	I1108 10:36:02.037688 1222758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790346
	I1108 10:36:02.056479 1222758 provision.go:143] copyHostCerts
	I1108 10:36:02.056561 1222758 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:36:02.056573 1222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:36:02.056658 1222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:36:02.056815 1222758 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:36:02.056821 1222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:36:02.056867 1222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:36:02.056930 1222758 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:36:02.056935 1222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:36:02.056962 1222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:36:02.057010 1222758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.embed-certs-790346 san=[127.0.0.1 192.168.76.2 embed-certs-790346 localhost minikube]
	I1108 10:36:02.831054 1222758 provision.go:177] copyRemoteCerts
	I1108 10:36:02.831128 1222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:36:02.831175 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:02.849686 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:02.956402 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:36:02.976431 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1108 10:36:02.996115 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:36:03.016671 1222758 provision.go:87] duration metric: took 979.037697ms to configureAuth
	I1108 10:36:03.016701 1222758 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:36:03.016930 1222758 config.go:182] Loaded profile config "embed-certs-790346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:36:03.017037 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:03.034733 1222758 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:03.035048 1222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1108 10:36:03.035074 1222758 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:36:03.357284 1222758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:36:03.357366 1222758 machine.go:97] duration metric: took 4.892051853s to provisionDockerMachine
	I1108 10:36:03.357400 1222758 start.go:293] postStartSetup for "embed-certs-790346" (driver="docker")
	I1108 10:36:03.357444 1222758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:36:03.357565 1222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:36:03.357641 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:03.379627 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:03.484328 1222758 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:36:03.487797 1222758 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:36:03.487830 1222758 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:36:03.487841 1222758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:36:03.487899 1222758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:36:03.487983 1222758 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:36:03.488094 1222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:36:03.495509 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:36:03.515428 1222758 start.go:296] duration metric: took 157.995838ms for postStartSetup
	I1108 10:36:03.515529 1222758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:36:03.515599 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:03.534432 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:03.637575 1222758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:36:03.642491 1222758 fix.go:56] duration metric: took 5.525343162s for fixHost
	I1108 10:36:03.642517 1222758 start.go:83] releasing machines lock for "embed-certs-790346", held for 5.525394451s
	I1108 10:36:03.642594 1222758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790346
	I1108 10:36:03.659921 1222758 ssh_runner.go:195] Run: cat /version.json
	I1108 10:36:03.659981 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:03.660249 1222758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:36:03.660303 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:03.686279 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:03.689332 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:03.788191 1222758 ssh_runner.go:195] Run: systemctl --version
	I1108 10:36:03.906132 1222758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:36:03.954127 1222758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:36:03.959058 1222758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:36:03.959150 1222758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:36:03.968353 1222758 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:36:03.968378 1222758 start.go:496] detecting cgroup driver to use...
	I1108 10:36:03.968410 1222758 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:36:03.968523 1222758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:36:03.984049 1222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:36:03.996873 1222758 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:36:03.996988 1222758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:36:04.014109 1222758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:36:04.029137 1222758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:36:04.155915 1222758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:36:04.275773 1222758 docker.go:234] disabling docker service ...
	I1108 10:36:04.275912 1222758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:36:04.292053 1222758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:36:04.305511 1222758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:36:04.427277 1222758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:36:04.557946 1222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:36:04.571786 1222758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:36:04.587358 1222758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:36:04.587426 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.596741 1222758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:36:04.596825 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.607093 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.619113 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.628862 1222758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:36:04.638249 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.647697 1222758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.656070 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.665842 1222758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:36:04.675635 1222758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:36:04.684369 1222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:36:04.813785 1222758 ssh_runner.go:195] Run: sudo systemctl restart crio
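
The series of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl, before crio is restarted. A rough Go equivalent of the first of those edits (same file path and key; regexp-based, mirroring the sed expression):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			log.Fatal(err)
		}
		// mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		patched := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(conf, patched, 0o644); err != nil {
			log.Fatal(err)
		}
	}
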
	I1108 10:36:04.970840 1222758 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:36:04.970944 1222758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:36:04.977022 1222758 start.go:564] Will wait 60s for crictl version
	I1108 10:36:04.977132 1222758 ssh_runner.go:195] Run: which crictl
	I1108 10:36:04.981294 1222758 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:36:05.014692 1222758 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:36:05.014814 1222758 ssh_runner.go:195] Run: crio --version
	I1108 10:36:05.044009 1222758 ssh_runner.go:195] Run: crio --version
	I1108 10:36:05.079219 1222758 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:36:05.081998 1222758 cli_runner.go:164] Run: docker network inspect embed-certs-790346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:36:05.098988 1222758 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:36:05.103109 1222758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:36:05.113965 1222758 kubeadm.go:884] updating cluster {Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:36:05.114094 1222758 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:36:05.114152 1222758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:36:05.150077 1222758 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:36:05.150107 1222758 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:36:05.150162 1222758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:36:05.180307 1222758 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:36:05.180332 1222758 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:36:05.180341 1222758 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:36:05.180478 1222758 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-790346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:36:05.180563 1222758 ssh_runner.go:195] Run: crio config
	I1108 10:36:05.235950 1222758 cni.go:84] Creating CNI manager for ""
	I1108 10:36:05.235977 1222758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:36:05.236000 1222758 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:36:05.236023 1222758 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-790346 NodeName:embed-certs-790346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:36:05.236152 1222758 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-790346"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:36:05.236225 1222758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:36:05.245747 1222758 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:36:05.245869 1222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:36:05.253388 1222758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 10:36:05.265929 1222758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:36:05.277997 1222758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
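
The 2215-byte kubeadm.yaml.new copied above is the four-document YAML stream printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks that stream and prints each document's kind, assuming gopkg.in/yaml.v3 is available (any YAML library with multi-document support would do):

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log line above
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // end of the multi-document stream
				}
				log.Fatal(err)
			}
			fmt.Printf("apiVersion=%v kind=%v\n", doc["apiVersion"], doc["kind"])
		}
	}
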
	I1108 10:36:05.291002 1222758 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:36:05.294533 1222758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:36:05.304302 1222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:36:05.426927 1222758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:36:05.449022 1222758 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346 for IP: 192.168.76.2
	I1108 10:36:05.449044 1222758 certs.go:195] generating shared ca certs ...
	I1108 10:36:05.449060 1222758 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:05.449214 1222758 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:36:05.449307 1222758 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:36:05.449320 1222758 certs.go:257] generating profile certs ...
	I1108 10:36:05.449422 1222758 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/client.key
	I1108 10:36:05.449505 1222758 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.key.f841e63b
	I1108 10:36:05.449558 1222758 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.key
	I1108 10:36:05.449678 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:36:05.449712 1222758 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:36:05.449725 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:36:05.449755 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:36:05.449781 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:36:05.449806 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:36:05.449852 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:36:05.450432 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:36:05.468070 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:36:05.485964 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:36:05.503652 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:36:05.525275 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1108 10:36:05.550516 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:36:05.572492 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:36:05.595109 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:36:05.618547 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:36:05.647747 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:36:05.670778 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:36:05.689736 1222758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:36:05.703897 1222758 ssh_runner.go:195] Run: openssl version
	I1108 10:36:05.712582 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:36:05.721990 1222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:05.725922 1222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:05.726041 1222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:05.769026 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:36:05.779153 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:36:05.787600 1222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:36:05.792095 1222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:36:05.792157 1222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:36:05.833032 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:36:05.841020 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:36:05.849263 1222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:36:05.853021 1222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:36:05.853108 1222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:36:05.896288 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:36:05.904537 1222758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:36:05.908152 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:36:05.949301 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:36:05.990831 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:36:06.032262 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:36:06.074098 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:36:06.133593 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
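
Each of the openssl `-checkend 86400` runs above asks whether a certificate will still be valid 24 hours from now. The same check expressed with Go's crypto/x509, using one of the certificate paths named in the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// path copied from the log; any PEM-encoded certificate works
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		remaining := time.Until(cert.NotAfter)
		fmt.Printf("expires %s (in %s); would fail -checkend 86400: %v\n",
			cert.NotAfter.Format(time.RFC3339), remaining, remaining < 24*time.Hour)
	}
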
	I1108 10:36:06.205086 1222758 kubeadm.go:401] StartCluster: {Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:36:06.205235 1222758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:36:06.205330 1222758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:36:06.296254 1222758 cri.go:89] found id: "ea89ad8d0eb688f083aeb7d472a94d7a3f3b2063341d0ca898c464ca703d3501"
	I1108 10:36:06.296290 1222758 cri.go:89] found id: "2edd058c6ccdbae4d8675a306904465a1fe93113e0e01793a923f585b98be4d2"
	I1108 10:36:06.296295 1222758 cri.go:89] found id: ""
	I1108 10:36:06.296364 1222758 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:36:06.309965 1222758 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:36:06Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:36:06.310178 1222758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:36:06.334105 1222758 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:36:06.334185 1222758 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:36:06.334386 1222758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:36:06.350153 1222758 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:36:06.350925 1222758 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-790346" does not appear in /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:36:06.351279 1222758 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-1027379/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-790346" cluster setting kubeconfig missing "embed-certs-790346" context setting]
	I1108 10:36:06.351919 1222758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:06.353903 1222758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:36:06.380237 1222758 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 10:36:06.380342 1222758 kubeadm.go:602] duration metric: took 46.128377ms to restartPrimaryControlPlane
	I1108 10:36:06.380390 1222758 kubeadm.go:403] duration metric: took 175.312734ms to StartCluster
	I1108 10:36:06.380426 1222758 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:06.380540 1222758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:36:06.382195 1222758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:06.382685 1222758 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:36:06.383039 1222758 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:36:06.383125 1222758 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-790346"
	I1108 10:36:06.383155 1222758 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-790346"
	W1108 10:36:06.383161 1222758 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:36:06.383199 1222758 host.go:66] Checking if "embed-certs-790346" exists ...
	I1108 10:36:06.383237 1222758 addons.go:70] Setting dashboard=true in profile "embed-certs-790346"
	I1108 10:36:06.383503 1222758 addons.go:239] Setting addon dashboard=true in "embed-certs-790346"
	W1108 10:36:06.383514 1222758 addons.go:248] addon dashboard should already be in state true
	I1108 10:36:06.383551 1222758 host.go:66] Checking if "embed-certs-790346" exists ...
	I1108 10:36:06.384162 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:36:06.384218 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:36:06.385440 1222758 addons.go:70] Setting default-storageclass=true in profile "embed-certs-790346"
	I1108 10:36:06.385491 1222758 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-790346"
	I1108 10:36:06.385951 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:36:06.405336 1222758 config.go:182] Loaded profile config "embed-certs-790346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:36:06.405551 1222758 out.go:179] * Verifying Kubernetes components...
	I1108 10:36:06.420654 1222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:36:06.427535 1222758 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:06.431322 1222758 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:36:06.431351 1222758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:36:06.431424 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:06.466563 1222758 addons.go:239] Setting addon default-storageclass=true in "embed-certs-790346"
	W1108 10:36:06.466589 1222758 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:36:06.466614 1222758 host.go:66] Checking if "embed-certs-790346" exists ...
	I1108 10:36:06.467061 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:36:06.471180 1222758 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:36:06.477008 1222758 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:36:06.481707 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:36:06.481745 1222758 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:36:06.481818 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:06.487576 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:06.518969 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:06.525600 1222758 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:36:06.525621 1222758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:36:06.525679 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:06.561764 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:06.783487 1222758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:36:06.853874 1222758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:36:06.889253 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:36:06.889329 1222758 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:36:06.989233 1222758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:36:07.004930 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:36:07.004957 1222758 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:36:07.081213 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:36:07.081241 1222758 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:36:07.141769 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:36:07.141794 1222758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:36:07.213912 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:36:07.213941 1222758 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:36:07.234266 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:36:07.234314 1222758 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:36:07.253678 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:36:07.253711 1222758 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:36:07.271431 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:36:07.271459 1222758 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:36:07.290317 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:36:07.290354 1222758 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:36:07.310937 1222758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:36:12.721375 1222758 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.867419686s)
	I1108 10:36:12.721434 1222758 node_ready.go:35] waiting up to 6m0s for node "embed-certs-790346" to be "Ready" ...
	I1108 10:36:12.721756 1222758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.732494946s)
	I1108 10:36:12.722030 1222758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.411062338s)
	I1108 10:36:12.722182 1222758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.938625332s)
	I1108 10:36:12.725107 1222758 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-790346 addons enable metrics-server
	
	I1108 10:36:12.746567 1222758 node_ready.go:49] node "embed-certs-790346" is "Ready"
	I1108 10:36:12.746645 1222758 node_ready.go:38] duration metric: took 25.18911ms for node "embed-certs-790346" to be "Ready" ...
	I1108 10:36:12.746674 1222758 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:36:12.746766 1222758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:36:12.756431 1222758 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 10:36:12.759391 1222758 addons.go:515] duration metric: took 6.376352963s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 10:36:12.761796 1222758 api_server.go:72] duration metric: took 6.379034022s to wait for apiserver process to appear ...
	I1108 10:36:12.761865 1222758 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:36:12.761899 1222758 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:36:12.770521 1222758 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:36:12.771622 1222758 api_server.go:141] control plane version: v1.34.1
	I1108 10:36:12.771650 1222758 api_server.go:131] duration metric: took 9.765163ms to wait for apiserver health ...
	I1108 10:36:12.771660 1222758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:36:12.778831 1222758 system_pods.go:59] 8 kube-system pods found
	I1108 10:36:12.778879 1222758 system_pods.go:61] "coredns-66bc5c9577-74xnp" [2be7fc7e-41f5-4dd2-bd38-28d8b7116878] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:36:12.778888 1222758 system_pods.go:61] "etcd-embed-certs-790346" [197baf26-b4ce-4eb3-a0b3-e77ae44ffc82] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:36:12.778896 1222758 system_pods.go:61] "kindnet-8978r" [ecd1e33a-2ecd-4aca-88f0-3f7c7546923d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:36:12.778905 1222758 system_pods.go:61] "kube-apiserver-embed-certs-790346" [160ec369-c7d1-415d-bd81-807e8cb09deb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:36:12.778917 1222758 system_pods.go:61] "kube-controller-manager-embed-certs-790346" [981fcf69-b2e5-4632-a888-b709045ba236] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:36:12.778940 1222758 system_pods.go:61] "kube-proxy-fx79j" [b9772cfb-4249-49a2-ab14-39aabc3dcc92] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:36:12.778954 1222758 system_pods.go:61] "kube-scheduler-embed-certs-790346" [77653d47-f56e-4a9c-b9ab-2f90a97947a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:36:12.778962 1222758 system_pods.go:61] "storage-provisioner" [30b396c5-a02e-4644-b513-31e6a6daf67b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:36:12.778968 1222758 system_pods.go:74] duration metric: took 7.280684ms to wait for pod list to return data ...
	I1108 10:36:12.778982 1222758 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:36:12.781383 1222758 default_sa.go:45] found service account: "default"
	I1108 10:36:12.781404 1222758 default_sa.go:55] duration metric: took 2.416444ms for default service account to be created ...
	I1108 10:36:12.781414 1222758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:36:12.797034 1222758 system_pods.go:86] 8 kube-system pods found
	I1108 10:36:12.797069 1222758 system_pods.go:89] "coredns-66bc5c9577-74xnp" [2be7fc7e-41f5-4dd2-bd38-28d8b7116878] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:36:12.797079 1222758 system_pods.go:89] "etcd-embed-certs-790346" [197baf26-b4ce-4eb3-a0b3-e77ae44ffc82] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:36:12.797089 1222758 system_pods.go:89] "kindnet-8978r" [ecd1e33a-2ecd-4aca-88f0-3f7c7546923d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:36:12.797098 1222758 system_pods.go:89] "kube-apiserver-embed-certs-790346" [160ec369-c7d1-415d-bd81-807e8cb09deb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:36:12.797111 1222758 system_pods.go:89] "kube-controller-manager-embed-certs-790346" [981fcf69-b2e5-4632-a888-b709045ba236] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:36:12.797118 1222758 system_pods.go:89] "kube-proxy-fx79j" [b9772cfb-4249-49a2-ab14-39aabc3dcc92] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:36:12.797124 1222758 system_pods.go:89] "kube-scheduler-embed-certs-790346" [77653d47-f56e-4a9c-b9ab-2f90a97947a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:36:12.797134 1222758 system_pods.go:89] "storage-provisioner" [30b396c5-a02e-4644-b513-31e6a6daf67b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:36:12.797142 1222758 system_pods.go:126] duration metric: took 15.72185ms to wait for k8s-apps to be running ...
	I1108 10:36:12.797156 1222758 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:36:12.797213 1222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:36:12.828596 1222758 system_svc.go:56] duration metric: took 31.429374ms WaitForService to wait for kubelet
	I1108 10:36:12.828664 1222758 kubeadm.go:587] duration metric: took 6.445905367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:36:12.828698 1222758 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:36:12.835569 1222758 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:36:12.835646 1222758 node_conditions.go:123] node cpu capacity is 2
	I1108 10:36:12.835674 1222758 node_conditions.go:105] duration metric: took 6.95184ms to run NodePressure ...
	I1108 10:36:12.835701 1222758 start.go:242] waiting for startup goroutines ...
	I1108 10:36:12.835725 1222758 start.go:247] waiting for cluster config update ...
	I1108 10:36:12.835752 1222758 start.go:256] writing updated cluster config ...
	I1108 10:36:12.836045 1222758 ssh_runner.go:195] Run: rm -f paused
	I1108 10:36:12.840224 1222758 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:36:12.847298 1222758 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-74xnp" in "kube-system" namespace to be "Ready" or be gone ...
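	
	The healthz wait recorded above amounts to polling https://192.168.76.2:8443/healthz until the body reads "ok". A minimal Go sketch of such a probe, for orientation only (it skips TLS verification rather than trusting the cluster CA, and is not the minikube implementation):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// Endpoint taken from the api_server.go lines in the log above.
		const url = "https://192.168.76.2:8443/healthz"
	
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Certificate checks are skipped purely for illustration;
			// a real client would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
	
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
	
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}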
	
	
	==> CRI-O <==
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.586347237Z" level=info msg="Removing container: 290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b" id=1a7005f8-9680-470f-81b6-bc941655d745 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.601171955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.601481165Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5f121333e1db713432afa3493875c6dcbc253451e4a8aa25cb43f87c6881854f/merged/etc/passwd: no such file or directory"
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.601573281Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5f121333e1db713432afa3493875c6dcbc253451e4a8aa25cb43f87c6881854f/merged/etc/group: no such file or directory"
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.601899851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.605820595Z" level=info msg="Error loading conmon cgroup of container 290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b: cgroup deleted" id=1a7005f8-9680-470f-81b6-bc941655d745 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.61571217Z" level=info msg="Removed container 290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc/dashboard-metrics-scraper" id=1a7005f8-9680-470f-81b6-bc941655d745 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.623310275Z" level=info msg="Created container 13e1625e444fc357bab28f8b30257f63116424e84a77d8a8e1251a97e1e2f759: kube-system/storage-provisioner/storage-provisioner" id=d6f790dc-981c-4e77-b2e5-958337cfa035 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.625854748Z" level=info msg="Starting container: 13e1625e444fc357bab28f8b30257f63116424e84a77d8a8e1251a97e1e2f759" id=4a1f911b-7fbe-48ec-9743-f990fee05009 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.629141353Z" level=info msg="Started container" PID=1649 containerID=13e1625e444fc357bab28f8b30257f63116424e84a77d8a8e1251a97e1e2f759 description=kube-system/storage-provisioner/storage-provisioner id=4a1f911b-7fbe-48ec-9743-f990fee05009 name=/runtime.v1.RuntimeService/StartContainer sandboxID=807ad86fd364cc4d1c4a66d5b66be86b96ae6af1a46b8bf478c2fe8395f41c6b
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.810229783Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.81498763Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.815190397Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.81528207Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.820975856Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.821138829Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.821218121Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.825578597Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.825728353Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.825799669Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.82898519Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.829128981Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.829204482Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.832432381Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.832610878Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	13e1625e444fc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   807ad86fd364c       storage-provisioner                                    kube-system
	9248499b7cf3d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   1adb0fb1cb27a       dashboard-metrics-scraper-6ffb444bf9-n7tbc             kubernetes-dashboard
	0fddc1f75d93c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   f8926e0760b78       kubernetes-dashboard-855c9754f9-9bgcn                  kubernetes-dashboard
	3f4eafd65d1d0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   dfd4205e18210       kindnet-7jcpv                                          kube-system
	a1d627cb2637a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   6b7dc8155bb5c       busybox                                                default
	2e334bec69705       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   a554842c7f2c3       coredns-66bc5c9577-x99cj                               kube-system
	d156164180806       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   807ad86fd364c       storage-provisioner                                    kube-system
	055c9437ada6c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   472df16ef21d0       kube-proxy-rtchk                                       kube-system
	01e006bfc6dda       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   8eb197efe1ec4       kube-controller-manager-default-k8s-diff-port-236075   kube-system
	acec2edc4de98       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f3ee24923bd56       kube-apiserver-default-k8s-diff-port-236075            kube-system
	fa7185ae3ba96       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   168a002f9b9b3       etcd-default-k8s-diff-port-236075                      kube-system
	7e2e28dd3fc4c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   eb96e1dd0d2ad       kube-scheduler-default-k8s-diff-port-236075            kube-system
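	The listing above is the runtime's own view of the containers on this node; with crictl configured against the CRI-O socket, a comparable table would typically come from `sudo crictl ps -a` (noted here only as a pointer, not part of the recorded test run).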
	
	
	==> coredns [2e334bec697058bae86b58475b1c435cee36106778bc232276551557d398810c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45080 - 47769 "HINFO IN 4800612627456662158.2394680252130200931. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014019801s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-236075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-236075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=default-k8s-diff-port-236075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_33_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:33:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-236075
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:36:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:36:02 +0000   Sat, 08 Nov 2025 10:33:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:36:02 +0000   Sat, 08 Nov 2025 10:33:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:36:02 +0000   Sat, 08 Nov 2025 10:33:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:36:02 +0000   Sat, 08 Nov 2025 10:34:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-236075
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                70b29cae-e7bf-4dbe-8a30-22731e1a459a
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-x99cj                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-236075                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-7jcpv                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-236075             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-236075    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-rtchk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-236075             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-n7tbc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9bgcn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m15s              kube-proxy       
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 2m23s              kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m23s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m22s              kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m22s              kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m22s              kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m19s              node-controller  Node default-k8s-diff-port-236075 event: Registered Node default-k8s-diff-port-236075 in Controller
	  Normal   NodeReady                96s                kubelet          Node default-k8s-diff-port-236075 status is now: NodeReady
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                node-controller  Node default-k8s-diff-port-236075 event: Registered Node default-k8s-diff-port-236075 in Controller
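	The node summary above is standard describe output; against a kubeconfig pointing at this cluster it could be regenerated with `kubectl describe node default-k8s-diff-port-236075` (mentioned for orientation only; the report collects it as part of the log dump).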
	
	
	==> dmesg <==
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[ +18.424643] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fa7185ae3ba9637256692faca55ed64deec71e9effbe9eebdae3f3c26cca6005] <==
	{"level":"warn","ts":"2025-11-08T10:35:19.337689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.369463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.379920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.408975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.422006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.455376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.474583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.503572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.527003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.558223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.587547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.603479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.668641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.680734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.731158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.739442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.764626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.793319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.815177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.854737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.883586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.912315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.943017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.985082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:20.086260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57494","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:18 up  9:18,  0 user,  load average: 4.12, 3.71, 3.04
	Linux default-k8s-diff-port-236075 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3f4eafd65d1d0509a5aa57695cc1c4d02ae484f6de117480550722edeb2c155e] <==
	I1108 10:35:23.611365       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:35:23.611614       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:35:23.611871       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:35:23.611917       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:35:23.611955       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:35:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:35:23.809260       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:35:23.809401       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:35:23.809437       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:35:23.809914       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:35:53.809535       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:35:53.810675       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:35:53.810743       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 10:35:53.810770       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1108 10:35:55.210326       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:35:55.210358       1 metrics.go:72] Registering metrics
	I1108 10:35:55.210420       1 controller.go:711] "Syncing nftables rules"
	I1108 10:36:03.809153       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:36:03.809197       1 main.go:301] handling current node
	I1108 10:36:13.809066       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:36:13.809179       1 main.go:301] handling current node
	
	
	==> kube-apiserver [acec2edc4de9822c06eae3e3c3a9f215ef4f521d8d4f7376ca41845506b657b4] <==
	I1108 10:35:21.422264       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:35:21.444193       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:35:21.449250       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:35:21.454844       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 10:35:21.455782       1 aggregator.go:171] initial CRD sync complete...
	I1108 10:35:21.455805       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:35:21.455812       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:35:21.455820       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:35:21.456212       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:35:21.456649       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:35:21.471511       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:35:21.474000       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:35:21.479439       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:35:21.479649       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:35:21.855903       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:35:22.070989       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:35:22.248992       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:35:22.431687       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:35:22.483889       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:35:22.616315       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.130.31"}
	I1108 10:35:22.633794       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.162.38"}
	I1108 10:35:24.410740       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:35:24.807170       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 10:35:24.857514       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:35:24.906394       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [01e006bfc6ddabc4f5b52b75d55b814f77b7715ec181a90987b6959c64dc9976] <==
	I1108 10:35:24.405743       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-236075"
	I1108 10:35:24.405806       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:35:24.407178       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 10:35:24.410226       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:35:24.412491       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:35:24.413740       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:35:24.431485       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:35:24.440418       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:35:24.441201       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:35:24.442779       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:35:24.445083       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:35:24.445102       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:35:24.445110       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:35:24.447131       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 10:35:24.447292       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:35:24.450762       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:35:24.451291       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:35:24.451384       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:35:24.451556       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:35:24.452387       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:35:24.452527       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 10:35:24.452594       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:35:24.452550       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:35:24.459516       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:35:24.465631       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [055c9437ada6c108a9ef6e524d0a66bf1dfcc081baabb70652559e4f149edd8d] <==
	I1108 10:35:23.380742       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:35:23.506180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:35:23.606498       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:35:23.606531       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:35:23.606600       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:35:23.727330       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:35:23.727396       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:35:23.731463       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:35:23.731962       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:35:23.732075       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:35:23.734770       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:35:23.734881       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:35:23.735221       1 config.go:200] "Starting service config controller"
	I1108 10:35:23.735268       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:35:23.735590       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:35:23.736547       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:35:23.737294       1 config.go:309] "Starting node config controller"
	I1108 10:35:23.737311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:35:23.737319       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:35:23.835888       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:35:23.835945       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 10:35:23.837457       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7e2e28dd3fc4c2eca9405df29e70031d910548f4d6fcf55d46048b375ddadca6] <==
	I1108 10:35:20.399916       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:35:22.296368       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:35:22.296398       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:35:22.311055       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:35:22.311258       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:35:22.311335       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:35:22.311399       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:35:22.312798       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:35:22.332084       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:35:22.312978       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:35:22.332733       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:35:22.412603       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:35:22.432545       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:35:22.432804       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:35:25 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:25.264171     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp9ml\" (UniqueName: \"kubernetes.io/projected/f5bee521-26ae-49f4-8fa3-942ca67f02d4-kube-api-access-zp9ml\") pod \"dashboard-metrics-scraper-6ffb444bf9-n7tbc\" (UID: \"f5bee521-26ae-49f4-8fa3-942ca67f02d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc"
	Nov 08 10:35:25 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:25.264811     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/24830468-2da1-4071-a4ca-9add3a940f75-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9bgcn\" (UID: \"24830468-2da1-4071-a4ca-9add3a940f75\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9bgcn"
	Nov 08 10:35:25 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:25.264887     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f5bee521-26ae-49f4-8fa3-942ca67f02d4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-n7tbc\" (UID: \"f5bee521-26ae-49f4-8fa3-942ca67f02d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc"
	Nov 08 10:35:25 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:25.264920     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmn7b\" (UniqueName: \"kubernetes.io/projected/24830468-2da1-4071-a4ca-9add3a940f75-kube-api-access-jmn7b\") pod \"kubernetes-dashboard-855c9754f9-9bgcn\" (UID: \"24830468-2da1-4071-a4ca-9add3a940f75\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9bgcn"
	Nov 08 10:35:25 default-k8s-diff-port-236075 kubelet[778]: W1108 10:35:25.526698     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/crio-1adb0fb1cb27ae208f0fef6069f3ecf85ea9af7c7f32d5ba48cb74e91d5a425f WatchSource:0}: Error finding container 1adb0fb1cb27ae208f0fef6069f3ecf85ea9af7c7f32d5ba48cb74e91d5a425f: Status 404 returned error can't find the container with id 1adb0fb1cb27ae208f0fef6069f3ecf85ea9af7c7f32d5ba48cb74e91d5a425f
	Nov 08 10:35:30 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:30.477880     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 10:35:32 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:32.530561     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9bgcn" podStartSLOduration=3.446937956 podStartE2EDuration="7.530543318s" podCreationTimestamp="2025-11-08 10:35:25 +0000 UTC" firstStartedPulling="2025-11-08 10:35:25.500348128 +0000 UTC m=+9.347475645" lastFinishedPulling="2025-11-08 10:35:29.583953415 +0000 UTC m=+13.431081007" observedRunningTime="2025-11-08 10:35:30.519667564 +0000 UTC m=+14.366795090" watchObservedRunningTime="2025-11-08 10:35:32.530543318 +0000 UTC m=+16.377670844"
	Nov 08 10:35:34 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:34.519160     778 scope.go:117] "RemoveContainer" containerID="bf32856aa58731d3493493a604cf603921c42150a50ee021f1215be12a4bfda8"
	Nov 08 10:35:35 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:35.522919     778 scope.go:117] "RemoveContainer" containerID="bf32856aa58731d3493493a604cf603921c42150a50ee021f1215be12a4bfda8"
	Nov 08 10:35:35 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:35.523197     778 scope.go:117] "RemoveContainer" containerID="290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b"
	Nov 08 10:35:35 default-k8s-diff-port-236075 kubelet[778]: E1108 10:35:35.523343     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n7tbc_kubernetes-dashboard(f5bee521-26ae-49f4-8fa3-942ca67f02d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc" podUID="f5bee521-26ae-49f4-8fa3-942ca67f02d4"
	Nov 08 10:35:36 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:36.527161     778 scope.go:117] "RemoveContainer" containerID="290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b"
	Nov 08 10:35:36 default-k8s-diff-port-236075 kubelet[778]: E1108 10:35:36.527329     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n7tbc_kubernetes-dashboard(f5bee521-26ae-49f4-8fa3-942ca67f02d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc" podUID="f5bee521-26ae-49f4-8fa3-942ca67f02d4"
	Nov 08 10:35:40 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:40.875090     778 scope.go:117] "RemoveContainer" containerID="290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b"
	Nov 08 10:35:40 default-k8s-diff-port-236075 kubelet[778]: E1108 10:35:40.875285     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n7tbc_kubernetes-dashboard(f5bee521-26ae-49f4-8fa3-942ca67f02d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc" podUID="f5bee521-26ae-49f4-8fa3-942ca67f02d4"
	Nov 08 10:35:53 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:53.339236     778 scope.go:117] "RemoveContainer" containerID="290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b"
	Nov 08 10:35:53 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:53.571639     778 scope.go:117] "RemoveContainer" containerID="d156164180806a75e51f45a02fba01ad1a09a5d84bc02c3049c5b2256db77b0e"
	Nov 08 10:35:53 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:53.582136     778 scope.go:117] "RemoveContainer" containerID="290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b"
	Nov 08 10:35:53 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:53.582537     778 scope.go:117] "RemoveContainer" containerID="9248499b7cf3dde2e3a3d480cca7fb372cdc9053f05a33387e99151065e29b36"
	Nov 08 10:35:53 default-k8s-diff-port-236075 kubelet[778]: E1108 10:35:53.583711     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n7tbc_kubernetes-dashboard(f5bee521-26ae-49f4-8fa3-942ca67f02d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc" podUID="f5bee521-26ae-49f4-8fa3-942ca67f02d4"
	Nov 08 10:36:00 default-k8s-diff-port-236075 kubelet[778]: I1108 10:36:00.875189     778 scope.go:117] "RemoveContainer" containerID="9248499b7cf3dde2e3a3d480cca7fb372cdc9053f05a33387e99151065e29b36"
	Nov 08 10:36:00 default-k8s-diff-port-236075 kubelet[778]: E1108 10:36:00.875368     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n7tbc_kubernetes-dashboard(f5bee521-26ae-49f4-8fa3-942ca67f02d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc" podUID="f5bee521-26ae-49f4-8fa3-942ca67f02d4"
	Nov 08 10:36:14 default-k8s-diff-port-236075 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:36:14 default-k8s-diff-port-236075 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:36:14 default-k8s-diff-port-236075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0fddc1f75d93c56972bc3e7f7b7bfa6c0c0e4208c9f791848d50f9dd3ddbeda3] <==
	2025/11/08 10:35:29 Using namespace: kubernetes-dashboard
	2025/11/08 10:35:29 Using in-cluster config to connect to apiserver
	2025/11/08 10:35:29 Using secret token for csrf signing
	2025/11/08 10:35:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:35:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:35:29 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:35:29 Generating JWE encryption key
	2025/11/08 10:35:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:35:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:35:30 Initializing JWE encryption key from synchronized object
	2025/11/08 10:35:30 Creating in-cluster Sidecar client
	2025/11/08 10:35:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:35:30 Serving insecurely on HTTP port: 9090
	2025/11/08 10:36:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:35:29 Starting overwatch
	
	
	==> storage-provisioner [13e1625e444fc357bab28f8b30257f63116424e84a77d8a8e1251a97e1e2f759] <==
	I1108 10:35:53.644865       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:35:53.657717       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:35:53.657838       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:35:53.661167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:57.117653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:01.377890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:04.976196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:08.029401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:11.052118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:11.058147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:36:11.058372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:36:11.058583       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-236075_31f9469e-c19d-4f5b-bded-69b9e4adc434!
	I1108 10:36:11.061870       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee54a8f0-7b96-489a-b394-63ad7711ea02", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-236075_31f9469e-c19d-4f5b-bded-69b9e4adc434 became leader
	W1108 10:36:11.079818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:11.088241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:36:11.161524       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-236075_31f9469e-c19d-4f5b-bded-69b9e4adc434!
	W1108 10:36:13.091257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:13.096234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:15.100307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:15.107423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:17.111021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:17.117656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d156164180806a75e51f45a02fba01ad1a09a5d84bc02c3049c5b2256db77b0e] <==
	I1108 10:35:23.301627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:35:53.303842       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075: exit status 2 (531.052991ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-236075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-236075
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-236075:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf",
	        "Created": "2025-11-08T10:33:26.092972115Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1219898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:35:09.618880393Z",
	            "FinishedAt": "2025-11-08T10:35:08.76544023Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/hostname",
	        "HostsPath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/hosts",
	        "LogPath": "/var/lib/docker/containers/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf-json.log",
	        "Name": "/default-k8s-diff-port-236075",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-236075:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-236075",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf",
	                "LowerDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04dd3632e35617aa66b1bf0632bc25953c160eaed5f6a1b822f02d32f61a4063/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-236075",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-236075/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-236075",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-236075",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-236075",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "07f71e87a632c9dc8aa452b7fef3a95b6c40b1b34ba3efe4c7453f5a0d799dc1",
	            "SandboxKey": "/var/run/docker/netns/07f71e87a632",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34527"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34528"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34531"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34529"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34530"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-236075": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:9e:d8:10:73:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38f263a32d28f326bd7caf8b4f69506dbe3e875f124d60f1d6382480728769c0",
	                    "EndpointID": "bbbf96e920d663c75da9c14bef9febce70579004139a33fb9eb2994bddcc1af6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-236075",
	                        "764db5e58d40"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075: exit status 2 (520.546797ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-236075 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-236075 logs -n 25: (1.730971373s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-517657 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-517657                                                                                                                                                                                                                        │ cert-options-517657          │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:30 UTC │ 08 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-171136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-171136 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:31 UTC │ 08 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-171136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ image   │ old-k8s-version-171136 image list --format=json                                                                                                                                                                                               │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-171136 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-837698                                                                                                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-236075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-236075 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-236075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-790346 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-790346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	│ image   │ default-k8s-diff-port-236075 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ pause   │ -p default-k8s-diff-port-236075 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:35:57
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:35:57.891140 1222758 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:35:57.891259 1222758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:35:57.891271 1222758 out.go:374] Setting ErrFile to fd 2...
	I1108 10:35:57.891276 1222758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:35:57.891549 1222758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:35:57.891903 1222758 out.go:368] Setting JSON to false
	I1108 10:35:57.893278 1222758 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33503,"bootTime":1762564655,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:35:57.893386 1222758 start.go:143] virtualization:  
	I1108 10:35:57.896425 1222758 out.go:179] * [embed-certs-790346] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:35:57.899690 1222758 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:35:57.899742 1222758 notify.go:221] Checking for updates...
	I1108 10:35:57.905999 1222758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:35:57.908831 1222758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:35:57.911777 1222758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:35:57.914706 1222758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:35:57.917791 1222758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:35:57.921155 1222758 config.go:182] Loaded profile config "embed-certs-790346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:35:57.921803 1222758 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:35:57.955723 1222758 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:35:57.955842 1222758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:35:58.015549 1222758 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:35:58.005224611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:35:58.015672 1222758 docker.go:319] overlay module found
	I1108 10:35:58.018744 1222758 out.go:179] * Using the docker driver based on existing profile
	I1108 10:35:58.021736 1222758 start.go:309] selected driver: docker
	I1108 10:35:58.021767 1222758 start.go:930] validating driver "docker" against &{Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:35:58.021869 1222758 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:35:58.022661 1222758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:35:58.082191 1222758 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:35:58.072378446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:35:58.082592 1222758 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:35:58.082627 1222758 cni.go:84] Creating CNI manager for ""
	I1108 10:35:58.082693 1222758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:35:58.082745 1222758 start.go:353] cluster config:
	{Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:35:58.087706 1222758 out.go:179] * Starting "embed-certs-790346" primary control-plane node in "embed-certs-790346" cluster
	I1108 10:35:58.090580 1222758 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:35:58.093668 1222758 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:35:58.096620 1222758 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:35:58.096690 1222758 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:35:58.096721 1222758 cache.go:59] Caching tarball of preloaded images
	I1108 10:35:58.096719 1222758 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:35:58.096807 1222758 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:35:58.096818 1222758 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:35:58.096935 1222758 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/config.json ...
	I1108 10:35:58.116984 1222758 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:35:58.117008 1222758 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:35:58.117027 1222758 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:35:58.117052 1222758 start.go:360] acquireMachinesLock for embed-certs-790346: {Name:mka3c0f23b810acc7356b6e9fd36989eb99bdea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:35:58.117110 1222758 start.go:364] duration metric: took 35.773µs to acquireMachinesLock for "embed-certs-790346"
	I1108 10:35:58.117134 1222758 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:35:58.117140 1222758 fix.go:54] fixHost starting: 
	I1108 10:35:58.117405 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:35:58.136171 1222758 fix.go:112] recreateIfNeeded on embed-certs-790346: state=Stopped err=<nil>
	W1108 10:35:58.136210 1222758 fix.go:138] unexpected machine state, will restart: <nil>
	W1108 10:35:56.278228 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	W1108 10:35:58.278335 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	W1108 10:36:00.290557 1219770 pod_ready.go:104] pod "coredns-66bc5c9577-x99cj" is not "Ready", error: <nil>
	I1108 10:36:00.778054 1219770 pod_ready.go:94] pod "coredns-66bc5c9577-x99cj" is "Ready"
	I1108 10:36:00.778085 1219770 pod_ready.go:86] duration metric: took 37.505764537s for pod "coredns-66bc5c9577-x99cj" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.780581 1219770 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.784726 1219770 pod_ready.go:94] pod "etcd-default-k8s-diff-port-236075" is "Ready"
	I1108 10:36:00.784750 1219770 pod_ready.go:86] duration metric: took 4.142079ms for pod "etcd-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.786844 1219770 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.790988 1219770 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-236075" is "Ready"
	I1108 10:36:00.791013 1219770 pod_ready.go:86] duration metric: took 4.145853ms for pod "kube-apiserver-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.793309 1219770 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:00.976587 1219770 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-236075" is "Ready"
	I1108 10:36:00.976618 1219770 pod_ready.go:86] duration metric: took 183.282927ms for pod "kube-controller-manager-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:01.176974 1219770 pod_ready.go:83] waiting for pod "kube-proxy-rtchk" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:01.577624 1219770 pod_ready.go:94] pod "kube-proxy-rtchk" is "Ready"
	I1108 10:36:01.577652 1219770 pod_ready.go:86] duration metric: took 400.647366ms for pod "kube-proxy-rtchk" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:01.776739 1219770 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:02.176954 1219770 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-236075" is "Ready"
	I1108 10:36:02.177002 1219770 pod_ready.go:86] duration metric: took 400.185678ms for pod "kube-scheduler-default-k8s-diff-port-236075" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:02.177017 1219770 pod_ready.go:40] duration metric: took 38.947041769s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:36:02.267768 1219770 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:36:02.271327 1219770 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-236075" cluster and "default" namespace by default
	I1108 10:35:58.139431 1222758 out.go:252] * Restarting existing docker container for "embed-certs-790346" ...
	I1108 10:35:58.139523 1222758 cli_runner.go:164] Run: docker start embed-certs-790346
	I1108 10:35:58.417343 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:35:58.439000 1222758 kic.go:430] container "embed-certs-790346" state is running.
	I1108 10:35:58.439382 1222758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790346
	I1108 10:35:58.465069 1222758 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/config.json ...
	I1108 10:35:58.465304 1222758 machine.go:94] provisionDockerMachine start ...
	I1108 10:35:58.465364 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:35:58.486576 1222758 main.go:143] libmachine: Using SSH client type: native
	I1108 10:35:58.487869 1222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1108 10:35:58.487889 1222758 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:35:58.488534 1222758 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49344->127.0.0.1:34532: read: connection reset by peer
	I1108 10:36:01.656540 1222758 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790346
	
	I1108 10:36:01.656575 1222758 ubuntu.go:182] provisioning hostname "embed-certs-790346"
	I1108 10:36:01.656651 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:01.676565 1222758 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:01.676914 1222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1108 10:36:01.676933 1222758 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-790346 && echo "embed-certs-790346" | sudo tee /etc/hostname
	I1108 10:36:01.848873 1222758 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790346
	
	I1108 10:36:01.848972 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:01.870189 1222758 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:01.870538 1222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1108 10:36:01.870589 1222758 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-790346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-790346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-790346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:36:02.037555 1222758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:36:02.037581 1222758 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:36:02.037599 1222758 ubuntu.go:190] setting up certificates
	I1108 10:36:02.037610 1222758 provision.go:84] configureAuth start
	I1108 10:36:02.037688 1222758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790346
	I1108 10:36:02.056479 1222758 provision.go:143] copyHostCerts
	I1108 10:36:02.056561 1222758 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:36:02.056573 1222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:36:02.056658 1222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:36:02.056815 1222758 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:36:02.056821 1222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:36:02.056867 1222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:36:02.056930 1222758 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:36:02.056935 1222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:36:02.056962 1222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:36:02.057010 1222758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.embed-certs-790346 san=[127.0.0.1 192.168.76.2 embed-certs-790346 localhost minikube]
	I1108 10:36:02.831054 1222758 provision.go:177] copyRemoteCerts
	I1108 10:36:02.831128 1222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:36:02.831175 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:02.849686 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:02.956402 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:36:02.976431 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1108 10:36:02.996115 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
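copyRemoteCerts above pushes the CA, server certificate, and server key to /etc/docker on the machine. A quick sanity check of those exact destination paths (taken from the scp lines above) could look like:

	sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem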
	I1108 10:36:03.016671 1222758 provision.go:87] duration metric: took 979.037697ms to configureAuth
	I1108 10:36:03.016701 1222758 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:36:03.016930 1222758 config.go:182] Loaded profile config "embed-certs-790346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:36:03.017037 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:03.034733 1222758 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:03.035048 1222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1108 10:36:03.035074 1222758 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:36:03.357284 1222758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:36:03.357366 1222758 machine.go:97] duration metric: took 4.892051853s to provisionDockerMachine
	I1108 10:36:03.357400 1222758 start.go:293] postStartSetup for "embed-certs-790346" (driver="docker")
	I1108 10:36:03.357444 1222758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:36:03.357565 1222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:36:03.357641 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:03.379627 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:03.484328 1222758 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:36:03.487797 1222758 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:36:03.487830 1222758 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:36:03.487841 1222758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:36:03.487899 1222758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:36:03.487983 1222758 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:36:03.488094 1222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:36:03.495509 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:36:03.515428 1222758 start.go:296] duration metric: took 157.995838ms for postStartSetup
	I1108 10:36:03.515529 1222758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:36:03.515599 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:03.534432 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:03.637575 1222758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:36:03.642491 1222758 fix.go:56] duration metric: took 5.525343162s for fixHost
	I1108 10:36:03.642517 1222758 start.go:83] releasing machines lock for "embed-certs-790346", held for 5.525394451s
	I1108 10:36:03.642594 1222758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790346
	I1108 10:36:03.659921 1222758 ssh_runner.go:195] Run: cat /version.json
	I1108 10:36:03.659981 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:03.660249 1222758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:36:03.660303 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:03.686279 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:03.689332 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:03.788191 1222758 ssh_runner.go:195] Run: systemctl --version
	I1108 10:36:03.906132 1222758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:36:03.954127 1222758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:36:03.959058 1222758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:36:03.959150 1222758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:36:03.968353 1222758 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:36:03.968378 1222758 start.go:496] detecting cgroup driver to use...
	I1108 10:36:03.968410 1222758 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:36:03.968523 1222758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:36:03.984049 1222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:36:03.996873 1222758 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:36:03.996988 1222758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:36:04.014109 1222758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:36:04.029137 1222758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:36:04.155915 1222758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:36:04.275773 1222758 docker.go:234] disabling docker service ...
	I1108 10:36:04.275912 1222758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:36:04.292053 1222758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:36:04.305511 1222758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:36:04.427277 1222758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:36:04.557946 1222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
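The block above stops and masks the cri-docker and docker units so that only CRI-O answers on the CRI socket. A minimal sketch for confirming this took effect (unit names as used in the commands above; masked or disabled units make this exit non-zero):

	systemctl is-enabled cri-docker.socket cri-docker.service docker.socket docker.service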
	I1108 10:36:04.571786 1222758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:36:04.587358 1222758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:36:04.587426 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.596741 1222758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:36:04.596825 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.607093 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.619113 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.628862 1222758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:36:04.638249 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.647697 1222758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.656070 1222758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:04.665842 1222758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:36:04.675635 1222758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:36:04.684369 1222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:36:04.813785 1222758 ssh_runner.go:195] Run: sudo systemctl restart crio
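The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.10.1, switch the cgroup manager to cgroupfs, run conmon in the pod cgroup, and allow unprivileged low ports via default_sysctls, before reloading systemd and restarting CRI-O. A minimal sketch for reviewing the keys those commands touch:

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf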
	I1108 10:36:04.970840 1222758 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:36:04.970944 1222758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:36:04.977022 1222758 start.go:564] Will wait 60s for crictl version
	I1108 10:36:04.977132 1222758 ssh_runner.go:195] Run: which crictl
	I1108 10:36:04.981294 1222758 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:36:05.014692 1222758 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:36:05.014814 1222758 ssh_runner.go:195] Run: crio --version
	I1108 10:36:05.044009 1222758 ssh_runner.go:195] Run: crio --version
	I1108 10:36:05.079219 1222758 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:36:05.081998 1222758 cli_runner.go:164] Run: docker network inspect embed-certs-790346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:36:05.098988 1222758 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:36:05.103109 1222758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:36:05.113965 1222758 kubeadm.go:884] updating cluster {Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:36:05.114094 1222758 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:36:05.114152 1222758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:36:05.150077 1222758 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:36:05.150107 1222758 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:36:05.150162 1222758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:36:05.180307 1222758 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:36:05.180332 1222758 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:36:05.180341 1222758 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:36:05.180478 1222758 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-790346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
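The kubelet drop-in rendered above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. To see the unit exactly as systemd merges it, a minimal check on the node would be:

	sudo systemctl cat kubelet.service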
	I1108 10:36:05.180563 1222758 ssh_runner.go:195] Run: crio config
	I1108 10:36:05.235950 1222758 cni.go:84] Creating CNI manager for ""
	I1108 10:36:05.235977 1222758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:36:05.236000 1222758 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:36:05.236023 1222758 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-790346 NodeName:embed-certs-790346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:36:05.236152 1222758 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-790346"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:36:05.236225 1222758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:36:05.245747 1222758 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:36:05.245869 1222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:36:05.253388 1222758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 10:36:05.265929 1222758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:36:05.277997 1222758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
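The kubeadm config shown earlier is staged here as /var/tmp/minikube/kubeadm.yaml.new; later in this log minikube diffs it against the copy already on the node to decide whether the control plane needs reconfiguring. The same comparison by hand, as a sketch:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new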
	I1108 10:36:05.291002 1222758 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:36:05.294533 1222758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:36:05.304302 1222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:36:05.426927 1222758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:36:05.449022 1222758 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346 for IP: 192.168.76.2
	I1108 10:36:05.449044 1222758 certs.go:195] generating shared ca certs ...
	I1108 10:36:05.449060 1222758 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:05.449214 1222758 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:36:05.449307 1222758 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:36:05.449320 1222758 certs.go:257] generating profile certs ...
	I1108 10:36:05.449422 1222758 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/client.key
	I1108 10:36:05.449505 1222758 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.key.f841e63b
	I1108 10:36:05.449558 1222758 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.key
	I1108 10:36:05.449678 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:36:05.449712 1222758 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:36:05.449725 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:36:05.449755 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:36:05.449781 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:36:05.449806 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:36:05.449852 1222758 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:36:05.450432 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:36:05.468070 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:36:05.485964 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:36:05.503652 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:36:05.525275 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1108 10:36:05.550516 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:36:05.572492 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:36:05.595109 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/embed-certs-790346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:36:05.618547 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:36:05.647747 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:36:05.670778 1222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:36:05.689736 1222758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:36:05.703897 1222758 ssh_runner.go:195] Run: openssl version
	I1108 10:36:05.712582 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:36:05.721990 1222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:05.725922 1222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:05.726041 1222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:05.769026 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:36:05.779153 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:36:05.787600 1222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:36:05.792095 1222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:36:05.792157 1222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:36:05.833032 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:36:05.841020 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:36:05.849263 1222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:36:05.853021 1222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:36:05.853108 1222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:36:05.896288 1222758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
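The openssl/ln pairs above implement OpenSSL's standard CA lookup scheme: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash. Condensed into one sketch for the minikubeCA file used in this run (the hash resolves to b5213941 above):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"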
	I1108 10:36:05.904537 1222758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:36:05.908152 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:36:05.949301 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:36:05.990831 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:36:06.032262 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:36:06.074098 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:36:06.133593 1222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
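Each -checkend run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), so a clean pass here means the existing control-plane certificates are still usable for at least a day. The same check with an explicit result, as a sketch:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"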
	I1108 10:36:06.205086 1222758 kubeadm.go:401] StartCluster: {Name:embed-certs-790346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:36:06.205235 1222758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:36:06.205330 1222758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:36:06.296254 1222758 cri.go:89] found id: "ea89ad8d0eb688f083aeb7d472a94d7a3f3b2063341d0ca898c464ca703d3501"
	I1108 10:36:06.296290 1222758 cri.go:89] found id: "2edd058c6ccdbae4d8675a306904465a1fe93113e0e01793a923f585b98be4d2"
	I1108 10:36:06.296295 1222758 cri.go:89] found id: ""
	I1108 10:36:06.296364 1222758 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:36:06.309965 1222758 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:36:06Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:36:06.310178 1222758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:36:06.334105 1222758 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:36:06.334185 1222758 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:36:06.334386 1222758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:36:06.350153 1222758 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:36:06.350925 1222758 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-790346" does not appear in /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:36:06.351279 1222758 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-1027379/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-790346" cluster setting kubeconfig missing "embed-certs-790346" context setting]
	I1108 10:36:06.351919 1222758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:06.353903 1222758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:36:06.380237 1222758 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 10:36:06.380342 1222758 kubeadm.go:602] duration metric: took 46.128377ms to restartPrimaryControlPlane
	I1108 10:36:06.380390 1222758 kubeadm.go:403] duration metric: took 175.312734ms to StartCluster
	I1108 10:36:06.380426 1222758 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:06.380540 1222758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:36:06.382195 1222758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:06.382685 1222758 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:36:06.383039 1222758 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:36:06.383125 1222758 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-790346"
	I1108 10:36:06.383155 1222758 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-790346"
	W1108 10:36:06.383161 1222758 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:36:06.383199 1222758 host.go:66] Checking if "embed-certs-790346" exists ...
	I1108 10:36:06.383237 1222758 addons.go:70] Setting dashboard=true in profile "embed-certs-790346"
	I1108 10:36:06.383503 1222758 addons.go:239] Setting addon dashboard=true in "embed-certs-790346"
	W1108 10:36:06.383514 1222758 addons.go:248] addon dashboard should already be in state true
	I1108 10:36:06.383551 1222758 host.go:66] Checking if "embed-certs-790346" exists ...
	I1108 10:36:06.384162 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:36:06.384218 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:36:06.385440 1222758 addons.go:70] Setting default-storageclass=true in profile "embed-certs-790346"
	I1108 10:36:06.385491 1222758 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-790346"
	I1108 10:36:06.385951 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:36:06.405336 1222758 config.go:182] Loaded profile config "embed-certs-790346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:36:06.405551 1222758 out.go:179] * Verifying Kubernetes components...
	I1108 10:36:06.420654 1222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:36:06.427535 1222758 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:06.431322 1222758 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:36:06.431351 1222758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:36:06.431424 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:06.466563 1222758 addons.go:239] Setting addon default-storageclass=true in "embed-certs-790346"
	W1108 10:36:06.466589 1222758 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:36:06.466614 1222758 host.go:66] Checking if "embed-certs-790346" exists ...
	I1108 10:36:06.467061 1222758 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:36:06.471180 1222758 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:36:06.477008 1222758 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:36:06.481707 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:36:06.481745 1222758 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:36:06.481818 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:06.487576 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:06.518969 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:06.525600 1222758 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:36:06.525621 1222758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:36:06.525679 1222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:36:06.561764 1222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:36:06.783487 1222758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:36:06.853874 1222758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:36:06.889253 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:36:06.889329 1222758 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:36:06.989233 1222758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:36:07.004930 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:36:07.004957 1222758 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:36:07.081213 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:36:07.081241 1222758 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:36:07.141769 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:36:07.141794 1222758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:36:07.213912 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:36:07.213941 1222758 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:36:07.234266 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:36:07.234314 1222758 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:36:07.253678 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:36:07.253711 1222758 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:36:07.271431 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:36:07.271459 1222758 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:36:07.290317 1222758 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:36:07.290354 1222758 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:36:07.310937 1222758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:36:12.721375 1222758 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.867419686s)
	I1108 10:36:12.721434 1222758 node_ready.go:35] waiting up to 6m0s for node "embed-certs-790346" to be "Ready" ...
	I1108 10:36:12.721756 1222758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.732494946s)
	I1108 10:36:12.722030 1222758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.411062338s)
	I1108 10:36:12.722182 1222758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.938625332s)
	I1108 10:36:12.725107 1222758 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-790346 addons enable metrics-server
	
	I1108 10:36:12.746567 1222758 node_ready.go:49] node "embed-certs-790346" is "Ready"
	I1108 10:36:12.746645 1222758 node_ready.go:38] duration metric: took 25.18911ms for node "embed-certs-790346" to be "Ready" ...
	I1108 10:36:12.746674 1222758 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:36:12.746766 1222758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:36:12.756431 1222758 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 10:36:12.759391 1222758 addons.go:515] duration metric: took 6.376352963s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
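All three addons were applied with the bundled kubectl at /var/lib/minikube/binaries/v1.34.1/kubectl against the node-local kubeconfig. A minimal sketch for checking what the dashboard manifests created (the kubernetes-dashboard namespace appears in the container listing at the end of this log):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  -n kubernetes-dashboard get deployments,services,pods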
	I1108 10:36:12.761796 1222758 api_server.go:72] duration metric: took 6.379034022s to wait for apiserver process to appear ...
	I1108 10:36:12.761865 1222758 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:36:12.761899 1222758 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:36:12.770521 1222758 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:36:12.771622 1222758 api_server.go:141] control plane version: v1.34.1
	I1108 10:36:12.771650 1222758 api_server.go:131] duration metric: took 9.765163ms to wait for apiserver health ...
	I1108 10:36:12.771660 1222758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:36:12.778831 1222758 system_pods.go:59] 8 kube-system pods found
	I1108 10:36:12.778879 1222758 system_pods.go:61] "coredns-66bc5c9577-74xnp" [2be7fc7e-41f5-4dd2-bd38-28d8b7116878] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:36:12.778888 1222758 system_pods.go:61] "etcd-embed-certs-790346" [197baf26-b4ce-4eb3-a0b3-e77ae44ffc82] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:36:12.778896 1222758 system_pods.go:61] "kindnet-8978r" [ecd1e33a-2ecd-4aca-88f0-3f7c7546923d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:36:12.778905 1222758 system_pods.go:61] "kube-apiserver-embed-certs-790346" [160ec369-c7d1-415d-bd81-807e8cb09deb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:36:12.778917 1222758 system_pods.go:61] "kube-controller-manager-embed-certs-790346" [981fcf69-b2e5-4632-a888-b709045ba236] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:36:12.778940 1222758 system_pods.go:61] "kube-proxy-fx79j" [b9772cfb-4249-49a2-ab14-39aabc3dcc92] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:36:12.778954 1222758 system_pods.go:61] "kube-scheduler-embed-certs-790346" [77653d47-f56e-4a9c-b9ab-2f90a97947a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:36:12.778962 1222758 system_pods.go:61] "storage-provisioner" [30b396c5-a02e-4644-b513-31e6a6daf67b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:36:12.778968 1222758 system_pods.go:74] duration metric: took 7.280684ms to wait for pod list to return data ...
	I1108 10:36:12.778982 1222758 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:36:12.781383 1222758 default_sa.go:45] found service account: "default"
	I1108 10:36:12.781404 1222758 default_sa.go:55] duration metric: took 2.416444ms for default service account to be created ...
	I1108 10:36:12.781414 1222758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:36:12.797034 1222758 system_pods.go:86] 8 kube-system pods found
	I1108 10:36:12.797069 1222758 system_pods.go:89] "coredns-66bc5c9577-74xnp" [2be7fc7e-41f5-4dd2-bd38-28d8b7116878] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:36:12.797079 1222758 system_pods.go:89] "etcd-embed-certs-790346" [197baf26-b4ce-4eb3-a0b3-e77ae44ffc82] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:36:12.797089 1222758 system_pods.go:89] "kindnet-8978r" [ecd1e33a-2ecd-4aca-88f0-3f7c7546923d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:36:12.797098 1222758 system_pods.go:89] "kube-apiserver-embed-certs-790346" [160ec369-c7d1-415d-bd81-807e8cb09deb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:36:12.797111 1222758 system_pods.go:89] "kube-controller-manager-embed-certs-790346" [981fcf69-b2e5-4632-a888-b709045ba236] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:36:12.797118 1222758 system_pods.go:89] "kube-proxy-fx79j" [b9772cfb-4249-49a2-ab14-39aabc3dcc92] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:36:12.797124 1222758 system_pods.go:89] "kube-scheduler-embed-certs-790346" [77653d47-f56e-4a9c-b9ab-2f90a97947a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:36:12.797134 1222758 system_pods.go:89] "storage-provisioner" [30b396c5-a02e-4644-b513-31e6a6daf67b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:36:12.797142 1222758 system_pods.go:126] duration metric: took 15.72185ms to wait for k8s-apps to be running ...
	I1108 10:36:12.797156 1222758 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:36:12.797213 1222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:36:12.828596 1222758 system_svc.go:56] duration metric: took 31.429374ms WaitForService to wait for kubelet
	I1108 10:36:12.828664 1222758 kubeadm.go:587] duration metric: took 6.445905367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:36:12.828698 1222758 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:36:12.835569 1222758 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:36:12.835646 1222758 node_conditions.go:123] node cpu capacity is 2
	I1108 10:36:12.835674 1222758 node_conditions.go:105] duration metric: took 6.95184ms to run NodePressure ...
	I1108 10:36:12.835701 1222758 start.go:242] waiting for startup goroutines ...
	I1108 10:36:12.835725 1222758 start.go:247] waiting for cluster config update ...
	I1108 10:36:12.835752 1222758 start.go:256] writing updated cluster config ...
	I1108 10:36:12.836045 1222758 ssh_runner.go:195] Run: rm -f paused
	I1108 10:36:12.840224 1222758 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:36:12.847298 1222758 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-74xnp" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 10:36:14.854141 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:16.866247 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
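The extra wait above is blocked on coredns-66bc5c9577-74xnp reporting Ready. A minimal sketch for inspecting that pod directly on the node (pod name and kubectl path taken from this run):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  -n kube-system describe pod coredns-66bc5c9577-74xnp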
	
	
	==> CRI-O <==
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.586347237Z" level=info msg="Removing container: 290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b" id=1a7005f8-9680-470f-81b6-bc941655d745 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.601171955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.601481165Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5f121333e1db713432afa3493875c6dcbc253451e4a8aa25cb43f87c6881854f/merged/etc/passwd: no such file or directory"
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.601573281Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5f121333e1db713432afa3493875c6dcbc253451e4a8aa25cb43f87c6881854f/merged/etc/group: no such file or directory"
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.601899851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.605820595Z" level=info msg="Error loading conmon cgroup of container 290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b: cgroup deleted" id=1a7005f8-9680-470f-81b6-bc941655d745 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.61571217Z" level=info msg="Removed container 290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc/dashboard-metrics-scraper" id=1a7005f8-9680-470f-81b6-bc941655d745 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.623310275Z" level=info msg="Created container 13e1625e444fc357bab28f8b30257f63116424e84a77d8a8e1251a97e1e2f759: kube-system/storage-provisioner/storage-provisioner" id=d6f790dc-981c-4e77-b2e5-958337cfa035 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.625854748Z" level=info msg="Starting container: 13e1625e444fc357bab28f8b30257f63116424e84a77d8a8e1251a97e1e2f759" id=4a1f911b-7fbe-48ec-9743-f990fee05009 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:35:53 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:35:53.629141353Z" level=info msg="Started container" PID=1649 containerID=13e1625e444fc357bab28f8b30257f63116424e84a77d8a8e1251a97e1e2f759 description=kube-system/storage-provisioner/storage-provisioner id=4a1f911b-7fbe-48ec-9743-f990fee05009 name=/runtime.v1.RuntimeService/StartContainer sandboxID=807ad86fd364cc4d1c4a66d5b66be86b96ae6af1a46b8bf478c2fe8395f41c6b
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.810229783Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.81498763Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.815190397Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.81528207Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.820975856Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.821138829Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.821218121Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.825578597Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.825728353Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.825799669Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.82898519Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.829128981Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.829204482Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.832432381Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:03 default-k8s-diff-port-236075 crio[651]: time="2025-11-08T10:36:03.832610878Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	13e1625e444fc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   807ad86fd364c       storage-provisioner                                    kube-system
	9248499b7cf3d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago       Exited              dashboard-metrics-scraper   2                   1adb0fb1cb27a       dashboard-metrics-scraper-6ffb444bf9-n7tbc             kubernetes-dashboard
	0fddc1f75d93c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   51 seconds ago       Running             kubernetes-dashboard        0                   f8926e0760b78       kubernetes-dashboard-855c9754f9-9bgcn                  kubernetes-dashboard
	3f4eafd65d1d0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   dfd4205e18210       kindnet-7jcpv                                          kube-system
	a1d627cb2637a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   6b7dc8155bb5c       busybox                                                default
	2e334bec69705       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   a554842c7f2c3       coredns-66bc5c9577-x99cj                               kube-system
	d156164180806       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   807ad86fd364c       storage-provisioner                                    kube-system
	055c9437ada6c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   472df16ef21d0       kube-proxy-rtchk                                       kube-system
	01e006bfc6dda       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   8eb197efe1ec4       kube-controller-manager-default-k8s-diff-port-236075   kube-system
	acec2edc4de98       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f3ee24923bd56       kube-apiserver-default-k8s-diff-port-236075            kube-system
	fa7185ae3ba96       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   168a002f9b9b3       etcd-default-k8s-diff-port-236075                      kube-system
	7e2e28dd3fc4c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   eb96e1dd0d2ad       kube-scheduler-default-k8s-diff-port-236075            kube-system
	
	
	==> coredns [2e334bec697058bae86b58475b1c435cee36106778bc232276551557d398810c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45080 - 47769 "HINFO IN 4800612627456662158.2394680252130200931. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014019801s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-236075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-236075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=default-k8s-diff-port-236075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_33_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:33:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-236075
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:36:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:36:02 +0000   Sat, 08 Nov 2025 10:33:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:36:02 +0000   Sat, 08 Nov 2025 10:33:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:36:02 +0000   Sat, 08 Nov 2025 10:33:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:36:02 +0000   Sat, 08 Nov 2025 10:34:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-236075
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                70b29cae-e7bf-4dbe-8a30-22731e1a459a
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-x99cj                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-236075                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-7jcpv                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-236075             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-236075    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-rtchk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-default-k8s-diff-port-236075             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-n7tbc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9bgcn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m18s              kube-proxy       
	  Normal   Starting                 57s                kube-proxy       
	  Normal   Starting                 2m25s              kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m24s              kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m24s              kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m24s              kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m21s              node-controller  Node default-k8s-diff-port-236075 event: Registered Node default-k8s-diff-port-236075 in Controller
	  Normal   NodeReady                98s                kubelet          Node default-k8s-diff-port-236075 status is now: NodeReady
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node default-k8s-diff-port-236075 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-236075 event: Registered Node default-k8s-diff-port-236075 in Controller
	
	
	==> dmesg <==
	[Nov 8 10:15] overlayfs: idmapped layers are currently not supported
	[ +18.424643] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fa7185ae3ba9637256692faca55ed64deec71e9effbe9eebdae3f3c26cca6005] <==
	{"level":"warn","ts":"2025-11-08T10:35:19.337689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.369463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.379920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.408975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.422006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.455376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.474583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.503572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.527003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.558223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.587547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.603479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.668641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.680734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.731158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.739442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.764626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.793319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.815177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.854737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.883586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.912315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.943017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:19.985082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:35:20.086260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57494","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:21 up  9:18,  0 user,  load average: 4.12, 3.71, 3.04
	Linux default-k8s-diff-port-236075 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3f4eafd65d1d0509a5aa57695cc1c4d02ae484f6de117480550722edeb2c155e] <==
	I1108 10:35:23.611365       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:35:23.611614       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:35:23.611871       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:35:23.611917       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:35:23.611955       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:35:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:35:23.809260       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:35:23.809401       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:35:23.809437       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:35:23.809914       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:35:53.809535       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:35:53.810675       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:35:53.810743       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 10:35:53.810770       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1108 10:35:55.210326       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:35:55.210358       1 metrics.go:72] Registering metrics
	I1108 10:35:55.210420       1 controller.go:711] "Syncing nftables rules"
	I1108 10:36:03.809153       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:36:03.809197       1 main.go:301] handling current node
	I1108 10:36:13.809066       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:36:13.809179       1 main.go:301] handling current node
	
	
	==> kube-apiserver [acec2edc4de9822c06eae3e3c3a9f215ef4f521d8d4f7376ca41845506b657b4] <==
	I1108 10:35:21.422264       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:35:21.444193       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:35:21.449250       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:35:21.454844       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 10:35:21.455782       1 aggregator.go:171] initial CRD sync complete...
	I1108 10:35:21.455805       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:35:21.455812       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:35:21.455820       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:35:21.456212       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:35:21.456649       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:35:21.471511       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:35:21.474000       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:35:21.479439       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:35:21.479649       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:35:21.855903       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:35:22.070989       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:35:22.248992       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:35:22.431687       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:35:22.483889       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:35:22.616315       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.130.31"}
	I1108 10:35:22.633794       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.162.38"}
	I1108 10:35:24.410740       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:35:24.807170       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 10:35:24.857514       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:35:24.906394       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [01e006bfc6ddabc4f5b52b75d55b814f77b7715ec181a90987b6959c64dc9976] <==
	I1108 10:35:24.405743       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-236075"
	I1108 10:35:24.405806       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:35:24.407178       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 10:35:24.410226       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:35:24.412491       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:35:24.413740       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:35:24.431485       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:35:24.440418       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:35:24.441201       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:35:24.442779       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:35:24.445083       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:35:24.445102       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:35:24.445110       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:35:24.447131       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 10:35:24.447292       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:35:24.450762       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:35:24.451291       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:35:24.451384       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:35:24.451556       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:35:24.452387       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:35:24.452527       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 10:35:24.452594       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:35:24.452550       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:35:24.459516       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:35:24.465631       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [055c9437ada6c108a9ef6e524d0a66bf1dfcc081baabb70652559e4f149edd8d] <==
	I1108 10:35:23.380742       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:35:23.506180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:35:23.606498       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:35:23.606531       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:35:23.606600       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:35:23.727330       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:35:23.727396       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:35:23.731463       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:35:23.731962       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:35:23.732075       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:35:23.734770       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:35:23.734881       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:35:23.735221       1 config.go:200] "Starting service config controller"
	I1108 10:35:23.735268       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:35:23.735590       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:35:23.736547       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:35:23.737294       1 config.go:309] "Starting node config controller"
	I1108 10:35:23.737311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:35:23.737319       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:35:23.835888       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:35:23.835945       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 10:35:23.837457       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7e2e28dd3fc4c2eca9405df29e70031d910548f4d6fcf55d46048b375ddadca6] <==
	I1108 10:35:20.399916       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:35:22.296368       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:35:22.296398       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:35:22.311055       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:35:22.311258       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:35:22.311335       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:35:22.311399       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:35:22.312798       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:35:22.332084       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:35:22.312978       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:35:22.332733       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:35:22.412603       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:35:22.432545       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:35:22.432804       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:35:25 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:25.264171     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp9ml\" (UniqueName: \"kubernetes.io/projected/f5bee521-26ae-49f4-8fa3-942ca67f02d4-kube-api-access-zp9ml\") pod \"dashboard-metrics-scraper-6ffb444bf9-n7tbc\" (UID: \"f5bee521-26ae-49f4-8fa3-942ca67f02d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc"
	Nov 08 10:35:25 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:25.264811     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/24830468-2da1-4071-a4ca-9add3a940f75-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9bgcn\" (UID: \"24830468-2da1-4071-a4ca-9add3a940f75\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9bgcn"
	Nov 08 10:35:25 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:25.264887     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f5bee521-26ae-49f4-8fa3-942ca67f02d4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-n7tbc\" (UID: \"f5bee521-26ae-49f4-8fa3-942ca67f02d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc"
	Nov 08 10:35:25 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:25.264920     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmn7b\" (UniqueName: \"kubernetes.io/projected/24830468-2da1-4071-a4ca-9add3a940f75-kube-api-access-jmn7b\") pod \"kubernetes-dashboard-855c9754f9-9bgcn\" (UID: \"24830468-2da1-4071-a4ca-9add3a940f75\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9bgcn"
	Nov 08 10:35:25 default-k8s-diff-port-236075 kubelet[778]: W1108 10:35:25.526698     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/764db5e58d40d13d94e85b44e9bfb7f10d40e6fcd2c819b354d68b951a1a7edf/crio-1adb0fb1cb27ae208f0fef6069f3ecf85ea9af7c7f32d5ba48cb74e91d5a425f WatchSource:0}: Error finding container 1adb0fb1cb27ae208f0fef6069f3ecf85ea9af7c7f32d5ba48cb74e91d5a425f: Status 404 returned error can't find the container with id 1adb0fb1cb27ae208f0fef6069f3ecf85ea9af7c7f32d5ba48cb74e91d5a425f
	Nov 08 10:35:30 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:30.477880     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 10:35:32 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:32.530561     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9bgcn" podStartSLOduration=3.446937956 podStartE2EDuration="7.530543318s" podCreationTimestamp="2025-11-08 10:35:25 +0000 UTC" firstStartedPulling="2025-11-08 10:35:25.500348128 +0000 UTC m=+9.347475645" lastFinishedPulling="2025-11-08 10:35:29.583953415 +0000 UTC m=+13.431081007" observedRunningTime="2025-11-08 10:35:30.519667564 +0000 UTC m=+14.366795090" watchObservedRunningTime="2025-11-08 10:35:32.530543318 +0000 UTC m=+16.377670844"
	Nov 08 10:35:34 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:34.519160     778 scope.go:117] "RemoveContainer" containerID="bf32856aa58731d3493493a604cf603921c42150a50ee021f1215be12a4bfda8"
	Nov 08 10:35:35 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:35.522919     778 scope.go:117] "RemoveContainer" containerID="bf32856aa58731d3493493a604cf603921c42150a50ee021f1215be12a4bfda8"
	Nov 08 10:35:35 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:35.523197     778 scope.go:117] "RemoveContainer" containerID="290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b"
	Nov 08 10:35:35 default-k8s-diff-port-236075 kubelet[778]: E1108 10:35:35.523343     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n7tbc_kubernetes-dashboard(f5bee521-26ae-49f4-8fa3-942ca67f02d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc" podUID="f5bee521-26ae-49f4-8fa3-942ca67f02d4"
	Nov 08 10:35:36 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:36.527161     778 scope.go:117] "RemoveContainer" containerID="290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b"
	Nov 08 10:35:36 default-k8s-diff-port-236075 kubelet[778]: E1108 10:35:36.527329     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n7tbc_kubernetes-dashboard(f5bee521-26ae-49f4-8fa3-942ca67f02d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc" podUID="f5bee521-26ae-49f4-8fa3-942ca67f02d4"
	Nov 08 10:35:40 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:40.875090     778 scope.go:117] "RemoveContainer" containerID="290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b"
	Nov 08 10:35:40 default-k8s-diff-port-236075 kubelet[778]: E1108 10:35:40.875285     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n7tbc_kubernetes-dashboard(f5bee521-26ae-49f4-8fa3-942ca67f02d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc" podUID="f5bee521-26ae-49f4-8fa3-942ca67f02d4"
	Nov 08 10:35:53 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:53.339236     778 scope.go:117] "RemoveContainer" containerID="290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b"
	Nov 08 10:35:53 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:53.571639     778 scope.go:117] "RemoveContainer" containerID="d156164180806a75e51f45a02fba01ad1a09a5d84bc02c3049c5b2256db77b0e"
	Nov 08 10:35:53 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:53.582136     778 scope.go:117] "RemoveContainer" containerID="290ec11aa3a01bd45dbb2f903fac55ecbedbf79ddfd0fe72d51b1d757fb3eb6b"
	Nov 08 10:35:53 default-k8s-diff-port-236075 kubelet[778]: I1108 10:35:53.582537     778 scope.go:117] "RemoveContainer" containerID="9248499b7cf3dde2e3a3d480cca7fb372cdc9053f05a33387e99151065e29b36"
	Nov 08 10:35:53 default-k8s-diff-port-236075 kubelet[778]: E1108 10:35:53.583711     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n7tbc_kubernetes-dashboard(f5bee521-26ae-49f4-8fa3-942ca67f02d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc" podUID="f5bee521-26ae-49f4-8fa3-942ca67f02d4"
	Nov 08 10:36:00 default-k8s-diff-port-236075 kubelet[778]: I1108 10:36:00.875189     778 scope.go:117] "RemoveContainer" containerID="9248499b7cf3dde2e3a3d480cca7fb372cdc9053f05a33387e99151065e29b36"
	Nov 08 10:36:00 default-k8s-diff-port-236075 kubelet[778]: E1108 10:36:00.875368     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n7tbc_kubernetes-dashboard(f5bee521-26ae-49f4-8fa3-942ca67f02d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n7tbc" podUID="f5bee521-26ae-49f4-8fa3-942ca67f02d4"
	Nov 08 10:36:14 default-k8s-diff-port-236075 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:36:14 default-k8s-diff-port-236075 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:36:14 default-k8s-diff-port-236075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0fddc1f75d93c56972bc3e7f7b7bfa6c0c0e4208c9f791848d50f9dd3ddbeda3] <==
	2025/11/08 10:35:29 Using namespace: kubernetes-dashboard
	2025/11/08 10:35:29 Using in-cluster config to connect to apiserver
	2025/11/08 10:35:29 Using secret token for csrf signing
	2025/11/08 10:35:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:35:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:35:29 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:35:29 Generating JWE encryption key
	2025/11/08 10:35:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:35:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:35:30 Initializing JWE encryption key from synchronized object
	2025/11/08 10:35:30 Creating in-cluster Sidecar client
	2025/11/08 10:35:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:35:30 Serving insecurely on HTTP port: 9090
	2025/11/08 10:36:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:35:29 Starting overwatch
	
	
	==> storage-provisioner [13e1625e444fc357bab28f8b30257f63116424e84a77d8a8e1251a97e1e2f759] <==
	I1108 10:35:53.657717       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:35:53.657838       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:35:53.661167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:35:57.117653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:01.377890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:04.976196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:08.029401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:11.052118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:11.058147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:36:11.058372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:36:11.058583       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-236075_31f9469e-c19d-4f5b-bded-69b9e4adc434!
	I1108 10:36:11.061870       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee54a8f0-7b96-489a-b394-63ad7711ea02", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-236075_31f9469e-c19d-4f5b-bded-69b9e4adc434 became leader
	W1108 10:36:11.079818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:11.088241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:36:11.161524       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-236075_31f9469e-c19d-4f5b-bded-69b9e4adc434!
	W1108 10:36:13.091257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:13.096234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:15.100307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:15.107423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:17.111021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:17.117656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:19.121509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:19.135940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:21.139510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:21.151550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d156164180806a75e51f45a02fba01ad1a09a5d84bc02c3049c5b2256db77b0e] <==
	I1108 10:35:23.301627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:35:53.303842       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075: exit status 2 (498.044231ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-236075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.04s)
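For reference, the sequence below is a minimal manual re-run of the checks this post-mortem captures. It is a sketch only: it assumes the default-k8s-diff-port-236075 profile is still up and that the same out/minikube-linux-arm64 binary and kubectl context used by the suite are available; it replays the pause/status/pod-listing commands shown above (pause with the verbose flags the suite passes) plus a direct CRI-O listing on the node.

    # Re-issue the pause with verbose logging (the flags the suite passes)
    out/minikube-linux-arm64 pause -p default-k8s-diff-port-236075 --alsologtostderr -v=1
    # Re-check the apiserver status field that returned exit status 2 above
    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075
    # List pods that are not Running, as helpers_test.go:269 does
    kubectl --context default-k8s-diff-port-236075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
    # Inspect container states on the node directly through CRI-O
    out/minikube-linux-arm64 -p default-k8s-diff-port-236075 ssh -- sudo crictl ps -a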

TestStartStop/group/embed-certs/serial/Pause (9.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-790346 --alsologtostderr -v=1
E1108 10:37:06.031747 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-790346 --alsologtostderr -v=1: exit status 80 (2.755769101s)

-- stdout --
	* Pausing node embed-certs-790346 ... 
	
	

-- /stdout --
** stderr ** 
	I1108 10:37:05.940618 1229107 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:37:05.940825 1229107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:37:05.940838 1229107 out.go:374] Setting ErrFile to fd 2...
	I1108 10:37:05.940843 1229107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:37:05.941110 1229107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:37:05.941385 1229107 out.go:368] Setting JSON to false
	I1108 10:37:05.941478 1229107 mustload.go:66] Loading cluster: embed-certs-790346
	I1108 10:37:05.941955 1229107 config.go:182] Loaded profile config "embed-certs-790346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:37:05.942469 1229107 cli_runner.go:164] Run: docker container inspect embed-certs-790346 --format={{.State.Status}}
	I1108 10:37:05.968415 1229107 host.go:66] Checking if "embed-certs-790346" exists ...
	I1108 10:37:05.968758 1229107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:37:06.109214 1229107 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:37:06.093044683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:37:06.109862 1229107 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-790346 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:37:06.119905 1229107 out.go:179] * Pausing node embed-certs-790346 ... 
	I1108 10:37:06.125648 1229107 host.go:66] Checking if "embed-certs-790346" exists ...
	I1108 10:37:06.126038 1229107 ssh_runner.go:195] Run: systemctl --version
	I1108 10:37:06.126190 1229107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790346
	I1108 10:37:06.155052 1229107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/embed-certs-790346/id_rsa Username:docker}
	I1108 10:37:06.288553 1229107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:37:06.330125 1229107 pause.go:52] kubelet running: true
	I1108 10:37:06.330282 1229107 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:37:06.784354 1229107 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:37:06.784523 1229107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:37:06.927232 1229107 cri.go:89] found id: "dd8fae91fcb9db2903395cab846cc54cb65d2787b94c2a4c392f8fced9aedf0d"
	I1108 10:37:06.927308 1229107 cri.go:89] found id: "1c61611abd0236cf3edd7cd7cbcb13d39c0e46247750268e93e4590ecd739144"
	I1108 10:37:06.927331 1229107 cri.go:89] found id: "99642b383fc0dec33ad2ce8f0c4a4ffe1b697e862a2493a131dd3ed36626da5e"
	I1108 10:37:06.927348 1229107 cri.go:89] found id: "b25262d04ac63153c5449a4717cac831ae0adffd457e2b4ff0b7e0902f0792e0"
	I1108 10:37:06.927385 1229107 cri.go:89] found id: "bb49cec67a688ecce92db1dcf1da23dc04e0dab933a76690980178360b633df1"
	I1108 10:37:06.927408 1229107 cri.go:89] found id: "e9b1d9f7c0483027ded3f21b252ced6d355c5f322e03b235441f038ad56cee88"
	I1108 10:37:06.927427 1229107 cri.go:89] found id: "86097d71b8a6e43eb320fe2cd739591210e92690c38263951b284aa8c7ee0039"
	I1108 10:37:06.927450 1229107 cri.go:89] found id: "ea89ad8d0eb688f083aeb7d472a94d7a3f3b2063341d0ca898c464ca703d3501"
	I1108 10:37:06.927487 1229107 cri.go:89] found id: "2edd058c6ccdbae4d8675a306904465a1fe93113e0e01793a923f585b98be4d2"
	I1108 10:37:06.927511 1229107 cri.go:89] found id: "26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512"
	I1108 10:37:06.927548 1229107 cri.go:89] found id: "18faf513aa0bb244af506a701456ddf2f5242f9fb0fceca3afc5ff31f5ff8f5e"
	I1108 10:37:06.927571 1229107 cri.go:89] found id: ""
	I1108 10:37:06.927656 1229107 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:37:06.942651 1229107 retry.go:31] will retry after 291.127171ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:37:06Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:37:07.234031 1229107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:37:07.252288 1229107 pause.go:52] kubelet running: false
	I1108 10:37:07.252378 1229107 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:37:07.576506 1229107 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:37:07.576651 1229107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:37:07.718616 1229107 cri.go:89] found id: "dd8fae91fcb9db2903395cab846cc54cb65d2787b94c2a4c392f8fced9aedf0d"
	I1108 10:37:07.718692 1229107 cri.go:89] found id: "1c61611abd0236cf3edd7cd7cbcb13d39c0e46247750268e93e4590ecd739144"
	I1108 10:37:07.718713 1229107 cri.go:89] found id: "99642b383fc0dec33ad2ce8f0c4a4ffe1b697e862a2493a131dd3ed36626da5e"
	I1108 10:37:07.718733 1229107 cri.go:89] found id: "b25262d04ac63153c5449a4717cac831ae0adffd457e2b4ff0b7e0902f0792e0"
	I1108 10:37:07.718771 1229107 cri.go:89] found id: "bb49cec67a688ecce92db1dcf1da23dc04e0dab933a76690980178360b633df1"
	I1108 10:37:07.718794 1229107 cri.go:89] found id: "e9b1d9f7c0483027ded3f21b252ced6d355c5f322e03b235441f038ad56cee88"
	I1108 10:37:07.718814 1229107 cri.go:89] found id: "86097d71b8a6e43eb320fe2cd739591210e92690c38263951b284aa8c7ee0039"
	I1108 10:37:07.718850 1229107 cri.go:89] found id: "ea89ad8d0eb688f083aeb7d472a94d7a3f3b2063341d0ca898c464ca703d3501"
	I1108 10:37:07.718872 1229107 cri.go:89] found id: "2edd058c6ccdbae4d8675a306904465a1fe93113e0e01793a923f585b98be4d2"
	I1108 10:37:07.718896 1229107 cri.go:89] found id: "26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512"
	I1108 10:37:07.718933 1229107 cri.go:89] found id: "18faf513aa0bb244af506a701456ddf2f5242f9fb0fceca3afc5ff31f5ff8f5e"
	I1108 10:37:07.718955 1229107 cri.go:89] found id: ""
	I1108 10:37:07.719041 1229107 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:37:07.739476 1229107 retry.go:31] will retry after 304.855114ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:37:07Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:37:08.045107 1229107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:37:08.073531 1229107 pause.go:52] kubelet running: false
	I1108 10:37:08.073662 1229107 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:37:08.380531 1229107 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:37:08.380641 1229107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:37:08.539173 1229107 cri.go:89] found id: "dd8fae91fcb9db2903395cab846cc54cb65d2787b94c2a4c392f8fced9aedf0d"
	I1108 10:37:08.539237 1229107 cri.go:89] found id: "1c61611abd0236cf3edd7cd7cbcb13d39c0e46247750268e93e4590ecd739144"
	I1108 10:37:08.539265 1229107 cri.go:89] found id: "99642b383fc0dec33ad2ce8f0c4a4ffe1b697e862a2493a131dd3ed36626da5e"
	I1108 10:37:08.539287 1229107 cri.go:89] found id: "b25262d04ac63153c5449a4717cac831ae0adffd457e2b4ff0b7e0902f0792e0"
	I1108 10:37:08.539321 1229107 cri.go:89] found id: "bb49cec67a688ecce92db1dcf1da23dc04e0dab933a76690980178360b633df1"
	I1108 10:37:08.539343 1229107 cri.go:89] found id: "e9b1d9f7c0483027ded3f21b252ced6d355c5f322e03b235441f038ad56cee88"
	I1108 10:37:08.539363 1229107 cri.go:89] found id: "86097d71b8a6e43eb320fe2cd739591210e92690c38263951b284aa8c7ee0039"
	I1108 10:37:08.539383 1229107 cri.go:89] found id: "ea89ad8d0eb688f083aeb7d472a94d7a3f3b2063341d0ca898c464ca703d3501"
	I1108 10:37:08.539418 1229107 cri.go:89] found id: "2edd058c6ccdbae4d8675a306904465a1fe93113e0e01793a923f585b98be4d2"
	I1108 10:37:08.539440 1229107 cri.go:89] found id: "26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512"
	I1108 10:37:08.539461 1229107 cri.go:89] found id: "18faf513aa0bb244af506a701456ddf2f5242f9fb0fceca3afc5ff31f5ff8f5e"
	I1108 10:37:08.539491 1229107 cri.go:89] found id: ""
	I1108 10:37:08.539583 1229107 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:37:08.565778 1229107 out.go:203] 
	W1108 10:37:08.569227 1229107 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:37:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:37:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:37:08.569400 1229107 out.go:285] * 
	* 
	W1108 10:37:08.578755 1229107 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:37:08.584041 1229107 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-790346 --alsologtostderr -v=1 failed: exit status 80
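For reference, the pause path that fails above first checks whether the kubelet is active, disables it, lists the CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, and only then runs `sudo runc list -f json`, which errors because `/run/runc` does not exist on this CRI-O node. A minimal sketch of reproducing those same checks by hand over `minikube ssh` follows; the profile name is taken from this run, and nothing beyond the commands already visible in the log above is implied.

    # Same checks the pause code performs, run manually (sketch).
    out/minikube-linux-arm64 -p embed-certs-790346 ssh -- sudo systemctl is-active kubelet

    # Enumerate kube-system containers exactly as the log does.
    out/minikube-linux-arm64 -p embed-certs-790346 ssh -- \
      sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # The failing call: runc's default state directory /run/runc is absent.
    out/minikube-linux-arm64 -p embed-certs-790346 ssh -- sudo runc list -f json

    # Confirm the missing state directory reported in the error message.
    out/minikube-linux-arm64 -p embed-certs-790346 ssh -- ls -ld /run/runc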
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
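The proxy snapshot above is all "<empty>"; to take the same snapshot on your own host, a one-liner like the following (a convenience only, not part of the harness) prints whichever proxy variables are set:

    env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy variables set"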
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-790346
helpers_test.go:243: (dbg) docker inspect embed-certs-790346:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7",
	        "Created": "2025-11-08T10:34:14.160209579Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1222886,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:35:58.171747853Z",
	            "FinishedAt": "2025-11-08T10:35:57.352811293Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/hostname",
	        "HostsPath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/hosts",
	        "LogPath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7-json.log",
	        "Name": "/embed-certs-790346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-790346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-790346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7",
	                "LowerDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-790346",
	                "Source": "/var/lib/docker/volumes/embed-certs-790346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-790346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-790346",
	                "name.minikube.sigs.k8s.io": "embed-certs-790346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "990617da7afb303e4cf8c211732d106eeb42ef18848e326919dde8831cc39856",
	            "SandboxKey": "/var/run/docker/netns/990617da7afb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34532"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34533"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34536"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34534"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34535"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-790346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:fd:7a:a1:60:06",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d495b48ffde5b28a4ff62dc6240c1429227e085b124c5835b7607c15b8bf3dd5",
	                    "EndpointID": "43d90fa0a03078c26c56ed8c6be4c86ef1a8f22fc238b84f15421feeb8a3e062",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-790346",
	                        "c42811f48049"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
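The inspect output above shows the node's host port mappings (SSH on 127.0.0.1:34532, the API server's 8443/tcp on 34535). The same Go template the harness used earlier in this log can pull a single mapping back out; the container name is from this run and the port values will differ between runs:

    # Host port forwarded to the node's SSH daemon (22/tcp inside the container).
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-790346

    # Same pattern for the Kubernetes API server port (8443/tcp).
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-790346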
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790346 -n embed-certs-790346
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790346 -n embed-certs-790346: exit status 2 (588.08213ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
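Here the host container is Running while the kubelet was just disabled by the failed pause, so a non-zero status code is unsurprising and the helper marks it "may be ok". For manual triage, the advice box above applies: rerun status with a wider template and capture the full log bundle. A sketch, assuming the usual minikube status template fields (the `logs --file` command is quoted from the advice box; the field names are an assumption, not confirmed by this report):

    out/minikube-linux-arm64 status -p embed-certs-790346 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
    out/minikube-linux-arm64 -p embed-certs-790346 logs --file=logs.txt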
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-790346 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-790346 logs -n 25: (2.110707955s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ image   │ old-k8s-version-171136 image list --format=json                                                                                                                                                                                               │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-171136 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-837698                                                                                                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-236075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-236075 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-236075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-790346 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-790346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ image   │ default-k8s-diff-port-236075 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ pause   │ -p default-k8s-diff-port-236075 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-553553                                                                                                                                                                                                               │ disable-driver-mounts-553553 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	│ image   │ embed-certs-790346 image list --format=json                                                                                                                                                                                                   │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-790346 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:36:25
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:36:25.941677 1226201 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:36:25.941777 1226201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:36:25.941782 1226201 out.go:374] Setting ErrFile to fd 2...
	I1108 10:36:25.941834 1226201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:36:25.942086 1226201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:36:25.942486 1226201 out.go:368] Setting JSON to false
	I1108 10:36:25.944473 1226201 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33531,"bootTime":1762564655,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:36:25.944541 1226201 start.go:143] virtualization:  
	I1108 10:36:25.948664 1226201 out.go:179] * [no-preload-291044] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:36:25.953213 1226201 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:36:25.953267 1226201 notify.go:221] Checking for updates...
	I1108 10:36:25.960527 1226201 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:36:25.963771 1226201 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:36:25.967203 1226201 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:36:25.970527 1226201 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:36:25.973881 1226201 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:36:25.977660 1226201 config.go:182] Loaded profile config "embed-certs-790346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:36:25.977806 1226201 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:36:26.020311 1226201 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:36:26.020510 1226201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:36:26.149774 1226201 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:36:26.138261978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:36:26.149898 1226201 docker.go:319] overlay module found
	I1108 10:36:26.155544 1226201 out.go:179] * Using the docker driver based on user configuration
	I1108 10:36:26.159200 1226201 start.go:309] selected driver: docker
	I1108 10:36:26.159222 1226201 start.go:930] validating driver "docker" against <nil>
	I1108 10:36:26.159252 1226201 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:36:26.159983 1226201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:36:26.286276 1226201 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:36:26.275188875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:36:26.286431 1226201 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:36:26.286663 1226201 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:36:26.291168 1226201 out.go:179] * Using Docker driver with root privileges
	I1108 10:36:26.294493 1226201 cni.go:84] Creating CNI manager for ""
	I1108 10:36:26.294561 1226201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:36:26.294571 1226201 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:36:26.294651 1226201 start.go:353] cluster config:
	{Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:36:26.298100 1226201 out.go:179] * Starting "no-preload-291044" primary control-plane node in "no-preload-291044" cluster
	I1108 10:36:26.301403 1226201 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:36:26.304723 1226201 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:36:26.307996 1226201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:36:26.308000 1226201 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:36:26.308146 1226201 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/config.json ...
	I1108 10:36:26.308177 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/config.json: {Name:mk712d9c640d8e5ee04268d7bb1adec91ec48f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:26.308383 1226201 cache.go:107] acquiring lock: {Name:mk8513c6159258582048bf022eb3626495f0ef99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.308479 1226201 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 10:36:26.308494 1226201 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 119.783µs
	I1108 10:36:26.308503 1226201 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 10:36:26.308524 1226201 cache.go:107] acquiring lock: {Name:mkfbe116f289c09e7f023243a3e334812266f562 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.308620 1226201 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:26.308814 1226201 cache.go:107] acquiring lock: {Name:mkab778ec210a01a148a027551ae4dd6f48ac681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.308898 1226201 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:26.309029 1226201 cache.go:107] acquiring lock: {Name:mk7e5c4997cde36ed0e08a0661a5a5dfada4e032 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.309100 1226201 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:26.309221 1226201 cache.go:107] acquiring lock: {Name:mkc673276c059e1336edcaed46b38c8432a558c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.309285 1226201 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:26.309385 1226201 cache.go:107] acquiring lock: {Name:mk0c87ccf4c259c637cc851ae066ca5ca4e4afa3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.309445 1226201 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1108 10:36:26.309540 1226201 cache.go:107] acquiring lock: {Name:mkde9e8ad2f329aff2c9e641a9eec6a25ba01057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.309604 1226201 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:26.309694 1226201 cache.go:107] acquiring lock: {Name:mkfd6f0a7827507a867318ffa03b1f88753d73c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.309760 1226201 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:26.311074 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:26.311539 1226201 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:26.311730 1226201 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:26.312014 1226201 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1108 10:36:26.312192 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:26.312360 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:26.312560 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:26.338404 1226201 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:36:26.338430 1226201 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:36:26.338444 1226201 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:36:26.338467 1226201 start.go:360] acquireMachinesLock for no-preload-291044: {Name:mkddf61b3e3a9309635e3814dcc2626dcf0ac06a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.338561 1226201 start.go:364] duration metric: took 75.189µs to acquireMachinesLock for "no-preload-291044"
	I1108 10:36:26.338590 1226201 start.go:93] Provisioning new machine with config: &{Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:36:26.338667 1226201 start.go:125] createHost starting for "" (driver="docker")
	W1108 10:36:23.856615 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:25.859049 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:27.863049 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:26.344847 1226201 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:36:26.345095 1226201 start.go:159] libmachine.API.Create for "no-preload-291044" (driver="docker")
	I1108 10:36:26.345132 1226201 client.go:173] LocalClient.Create starting
	I1108 10:36:26.345197 1226201 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem
	I1108 10:36:26.345231 1226201 main.go:143] libmachine: Decoding PEM data...
	I1108 10:36:26.345244 1226201 main.go:143] libmachine: Parsing certificate...
	I1108 10:36:26.345287 1226201 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem
	I1108 10:36:26.345304 1226201 main.go:143] libmachine: Decoding PEM data...
	I1108 10:36:26.345313 1226201 main.go:143] libmachine: Parsing certificate...
	I1108 10:36:26.345657 1226201 cli_runner.go:164] Run: docker network inspect no-preload-291044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:36:26.383116 1226201 cli_runner.go:211] docker network inspect no-preload-291044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:36:26.383201 1226201 network_create.go:284] running [docker network inspect no-preload-291044] to gather additional debugging logs...
	I1108 10:36:26.383216 1226201 cli_runner.go:164] Run: docker network inspect no-preload-291044
	W1108 10:36:26.413064 1226201 cli_runner.go:211] docker network inspect no-preload-291044 returned with exit code 1
	I1108 10:36:26.413098 1226201 network_create.go:287] error running [docker network inspect no-preload-291044]: docker network inspect no-preload-291044: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-291044 not found
	I1108 10:36:26.413110 1226201 network_create.go:289] output of [docker network inspect no-preload-291044]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-291044 not found
	
	** /stderr **
	I1108 10:36:26.413196 1226201 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:36:26.441097 1226201 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f127b1978c3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:c7:37:65:8c:96} reservation:<nil>}
	I1108 10:36:26.441417 1226201 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b98bf73d2e94 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:99:be:46:ea:86} reservation:<nil>}
	I1108 10:36:26.441826 1226201 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c4df73992be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:ad:c1:c0:ea:6d} reservation:<nil>}
	I1108 10:36:26.442077 1226201 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d495b48ffde5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:ac:97:fe:92:64} reservation:<nil>}
	I1108 10:36:26.443253 1226201 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019474d0}
	I1108 10:36:26.443328 1226201 network_create.go:124] attempt to create docker network no-preload-291044 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 10:36:26.443420 1226201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-291044 no-preload-291044
	I1108 10:36:26.536794 1226201 network_create.go:108] docker network no-preload-291044 192.168.85.0/24 created
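The four "skipping subnet ... taken" lines above show the probe walking minikube's KIC-style private ranges in order (192.168.49.0/24, .58, .67, .76, ...) and settling on the first /24 with no existing Docker bridge, here 192.168.85.0/24. Below is a minimal sketch of that selection; the function name, the +9 step and the upper bound are inferred from the log pattern, not taken from minikube's source, and the "taken" set stands in for a real scan of existing networks.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate /24 ranges starting at 192.168.49.0 and
// returns the first one not already claimed by an existing bridge.
func firstFreeSubnet(taken map[string]bool) *net.IPNet {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			continue
		}
		_, subnet, _ := net.ParseCIDR(cidr)
		return subnet
	}
	return nil
}

func main() {
	// Subnets the log shows as taken by br-0f127b1978c3, br-b98bf73d2e94, etc.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24
}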
	I1108 10:36:26.536879 1226201 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-291044" container
	I1108 10:36:26.536993 1226201 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:36:26.561011 1226201 cli_runner.go:164] Run: docker volume create no-preload-291044 --label name.minikube.sigs.k8s.io=no-preload-291044 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:36:26.589777 1226201 oci.go:103] Successfully created a docker volume no-preload-291044
	I1108 10:36:26.589857 1226201 cli_runner.go:164] Run: docker run --rm --name no-preload-291044-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-291044 --entrypoint /usr/bin/test -v no-preload-291044:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:36:26.639461 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1108 10:36:26.662227 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1108 10:36:26.680739 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1108 10:36:26.683418 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1108 10:36:26.686954 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1108 10:36:26.705368 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1108 10:36:26.705393 1226201 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 396.008663ms
	I1108 10:36:26.705405 1226201 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1108 10:36:26.711393 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1108 10:36:26.744137 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1108 10:36:27.079158 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1108 10:36:27.079187 1226201 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 769.96591ms
	I1108 10:36:27.079200 1226201 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1108 10:36:27.639645 1226201 cli_runner.go:217] Completed: docker run --rm --name no-preload-291044-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-291044 --entrypoint /usr/bin/test -v no-preload-291044:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (1.049749079s)
	I1108 10:36:27.640519 1226201 oci.go:107] Successfully prepared a docker volume no-preload-291044
	I1108 10:36:27.640571 1226201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1108 10:36:27.640715 1226201 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:36:27.641292 1226201 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:36:27.756693 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1108 10:36:27.756727 1226201 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.447700896s
	I1108 10:36:27.756741 1226201 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1108 10:36:27.757763 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1108 10:36:27.757800 1226201 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.44810808s
	I1108 10:36:27.757812 1226201 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1108 10:36:27.847682 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1108 10:36:27.847711 1226201 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.538899672s
	I1108 10:36:27.847724 1226201 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1108 10:36:27.885535 1226201 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-291044 --name no-preload-291044 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-291044 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-291044 --network no-preload-291044 --ip 192.168.85.2 --volume no-preload-291044:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:36:28.002974 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1108 10:36:28.003516 1226201 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.694984206s
	I1108 10:36:28.003534 1226201 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1108 10:36:28.528908 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Running}}
	I1108 10:36:28.571303 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:36:28.603948 1226201 cli_runner.go:164] Run: docker exec no-preload-291044 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:36:28.696764 1226201 oci.go:144] the created container "no-preload-291044" has a running status.
	I1108 10:36:28.696800 1226201 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa...
	I1108 10:36:28.950439 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1108 10:36:28.950525 1226201 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.640984663s
	I1108 10:36:28.950554 1226201 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1108 10:36:28.953578 1226201 cache.go:87] Successfully saved all images to host disk.
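Each "cache image ... took ..." / "save to tar file ... succeeded" pair above is a skip-if-present download of one image into the host-side cache under .minikube/cache/images. A rough sketch of that single step using go-containerregistry's crane package follows; the library choice, function name and cache path are illustrative assumptions, and minikube's real cache code does more (locking, digest checks, retries).

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/google/go-containerregistry/pkg/crane"
)

// cacheToTar saves one image to a tarball under cacheDir, skipping images
// that were already cached on a previous run (the "exists" lines in the log).
func cacheToTar(cacheDir, ref string) error {
	dest := filepath.Join(cacheDir, strings.ReplaceAll(ref, ":", "_"))
	if _, err := os.Stat(dest); err == nil {
		fmt.Printf("%s exists\n", dest)
		return nil
	}
	start := time.Now()
	img, err := crane.Pull(ref)
	if err != nil {
		return err
	}
	if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
		return err
	}
	if err := crane.Save(img, ref, dest); err != nil {
		return err
	}
	fmt.Printf("cache image %q -> %q took %s\n", ref, dest, time.Since(start))
	return nil
}

func main() {
	_ = cacheToTar("/tmp/minikube-cache/images/arm64", "registry.k8s.io/pause:3.10.1")
}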
	I1108 10:36:29.137687 1226201 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:36:29.162377 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:36:29.189754 1226201 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:36:29.189774 1226201 kic_runner.go:114] Args: [docker exec --privileged no-preload-291044 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:36:29.255610 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:36:29.275785 1226201 machine.go:94] provisionDockerMachine start ...
	I1108 10:36:29.275909 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:29.298213 1226201 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:29.298569 1226201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1108 10:36:29.298580 1226201 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:36:29.299335 1226201 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54184->127.0.0.1:34537: read: connection reset by peer
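The "connection reset by peer" above is transient: sshd inside the freshly started container is not accepting connections yet, so libmachine keeps redialing until the "SSH cmd err, output" line a few seconds later. A minimal retry loop in the same spirit, using golang.org/x/crypto/ssh with the port and key path taken from this log; it is an illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps trying to open an SSH connection until the container's
// sshd starts accepting connections or the deadline passes.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
	var lastErr error
	for end := time.Now().Add(deadline); time.Now().Before(end); time.Sleep(time.Second) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err // e.g. "read: connection reset by peer" while sshd is still starting
	}
	return nil, lastErr
}

func main() {
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/no-preload-291044/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:34537", cfg, time.Minute)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}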
	W1108 10:36:30.353687 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:32.354549 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:32.464112 1226201 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-291044
	
	I1108 10:36:32.464137 1226201 ubuntu.go:182] provisioning hostname "no-preload-291044"
	I1108 10:36:32.464203 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:32.482229 1226201 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:32.482543 1226201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1108 10:36:32.482561 1226201 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-291044 && echo "no-preload-291044" | sudo tee /etc/hostname
	I1108 10:36:32.642542 1226201 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-291044
	
	I1108 10:36:32.642626 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:32.660721 1226201 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:32.661051 1226201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1108 10:36:32.661075 1226201 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-291044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-291044/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-291044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:36:32.812571 1226201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:36:32.812598 1226201 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:36:32.812623 1226201 ubuntu.go:190] setting up certificates
	I1108 10:36:32.812632 1226201 provision.go:84] configureAuth start
	I1108 10:36:32.812693 1226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:36:32.835184 1226201 provision.go:143] copyHostCerts
	I1108 10:36:32.835262 1226201 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:36:32.835276 1226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:36:32.835360 1226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:36:32.835469 1226201 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:36:32.835480 1226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:36:32.835510 1226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:36:32.835569 1226201 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:36:32.835578 1226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:36:32.835605 1226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:36:32.835656 1226201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.no-preload-291044 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-291044]
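The "generating server cert" line lists the SANs that end up in machines/server.pem: 127.0.0.1, 192.168.85.2, localhost, minikube and the node name. Roughly what that looks like with the Go standard library is sketched below; it assumes an RSA CA key in PKCS#1 form, uses the CertExpiration value from the profile config above, and trims error handling and the real file paths for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA generated earlier under .minikube/certs (paths illustrative).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA (PKCS#1) CA key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-291044"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs listed in the "generating server cert" log line.
		DNSNames:    []string{"localhost", "minikube", "no-preload-291044"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o600)
	_ = os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
}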
	I1108 10:36:33.257005 1226201 provision.go:177] copyRemoteCerts
	I1108 10:36:33.257073 1226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:36:33.257124 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:33.274760 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:36:33.381132 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:36:33.400553 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:36:33.422403 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:36:33.443177 1226201 provision.go:87] duration metric: took 630.530919ms to configureAuth
	I1108 10:36:33.443206 1226201 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:36:33.443433 1226201 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:36:33.443550 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:33.461150 1226201 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:33.461455 1226201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1108 10:36:33.461478 1226201 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:36:33.810488 1226201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:36:33.810511 1226201 machine.go:97] duration metric: took 4.534685014s to provisionDockerMachine
	I1108 10:36:33.810523 1226201 client.go:176] duration metric: took 7.465384358s to LocalClient.Create
	I1108 10:36:33.810537 1226201 start.go:167] duration metric: took 7.465444935s to libmachine.API.Create "no-preload-291044"
	I1108 10:36:33.810549 1226201 start.go:293] postStartSetup for "no-preload-291044" (driver="docker")
	I1108 10:36:33.810562 1226201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:36:33.810630 1226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:36:33.810675 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:33.827801 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:36:33.933052 1226201 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:36:33.936389 1226201 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:36:33.936419 1226201 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:36:33.936429 1226201 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:36:33.936516 1226201 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:36:33.936598 1226201 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:36:33.936704 1226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:36:33.944363 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:36:33.963188 1226201 start.go:296] duration metric: took 152.622906ms for postStartSetup
	I1108 10:36:33.963600 1226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:36:33.980317 1226201 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/config.json ...
	I1108 10:36:33.980649 1226201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:36:33.980703 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:33.998230 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:36:34.101642 1226201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:36:34.106493 1226201 start.go:128] duration metric: took 7.767808628s to createHost
	I1108 10:36:34.106518 1226201 start.go:83] releasing machines lock for "no-preload-291044", held for 7.767946297s
	I1108 10:36:34.106595 1226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:36:34.124085 1226201 ssh_runner.go:195] Run: cat /version.json
	I1108 10:36:34.124145 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:34.124380 1226201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:36:34.124464 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:34.146152 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:36:34.148293 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:36:34.252180 1226201 ssh_runner.go:195] Run: systemctl --version
	I1108 10:36:34.352014 1226201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:36:34.394062 1226201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:36:34.398799 1226201 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:36:34.398870 1226201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:36:34.430534 1226201 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
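The find/-exec run above renames any pre-existing bridge or podman CNI configs to *.mk_disabled so that only the CNI minikube installs later (kindnet, per the "recommending kindnet" line further down) is active. An equivalent walk in Go, purely illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman conflists so the runtime ignores
// them, mirroring the `find ... -exec mv {} {}.mk_disabled` seen in the log.
func disableBridgeCNIs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	_ = disableBridgeCNIs("/etc/cni/net.d")
}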
	I1108 10:36:34.430561 1226201 start.go:496] detecting cgroup driver to use...
	I1108 10:36:34.430593 1226201 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:36:34.430662 1226201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:36:34.449455 1226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:36:34.462319 1226201 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:36:34.462382 1226201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:36:34.480246 1226201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:36:34.499094 1226201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:36:34.630200 1226201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:36:34.759993 1226201 docker.go:234] disabling docker service ...
	I1108 10:36:34.760113 1226201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:36:34.784931 1226201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:36:34.799843 1226201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:36:34.938905 1226201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:36:35.076128 1226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:36:35.089937 1226201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:36:35.107302 1226201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:36:35.107372 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.116543 1226201 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:36:35.116629 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.125982 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.134833 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.144005 1226201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:36:35.152281 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.160757 1226201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.174964 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.184740 1226201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:36:35.192280 1226201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:36:35.199747 1226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:36:35.315693 1226201 ssh_runner.go:195] Run: sudo systemctl restart crio
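Taken together, the sed commands above edit the kicbase drop-in at /etc/crio/crio.conf.d/02-crio.conf before the restart: they pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and allow unprivileged low ports. Assuming the stock layout of that file, the edited sections end up looking roughly like the excerpt below (an approximation, not the full file):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]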
	I1108 10:36:35.453083 1226201 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:36:35.453161 1226201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:36:35.457394 1226201 start.go:564] Will wait 60s for crictl version
	I1108 10:36:35.457458 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:35.461178 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:36:35.490376 1226201 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:36:35.490481 1226201 ssh_runner.go:195] Run: crio --version
	I1108 10:36:35.520793 1226201 ssh_runner.go:195] Run: crio --version
	I1108 10:36:35.561796 1226201 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:36:35.564577 1226201 cli_runner.go:164] Run: docker network inspect no-preload-291044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:36:35.580749 1226201 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:36:35.587785 1226201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:36:35.597691 1226201 kubeadm.go:884] updating cluster {Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:36:35.597805 1226201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:36:35.597851 1226201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:36:35.621144 1226201 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1108 10:36:35.621170 1226201 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 10:36:35.621205 1226201 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:35.621397 1226201 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:35.621487 1226201 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:35.621567 1226201 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:35.621653 1226201 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:35.621737 1226201 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1108 10:36:35.621820 1226201 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:35.621909 1226201 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:35.623257 1226201 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:35.623526 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:35.623693 1226201 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1108 10:36:35.623845 1226201 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:35.623995 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:35.624167 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:35.624322 1226201 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:35.624690 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:35.874518 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:35.897516 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:35.900589 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1108 10:36:35.916481 1226201 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1108 10:36:35.916588 1226201 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:35.916668 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:35.921430 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:35.937577 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	W1108 10:36:34.853457 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:36.855507 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:35.951342 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:35.953422 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:35.977996 1226201 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1108 10:36:35.978048 1226201 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:35.978105 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.034730 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:36.036274 1226201 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1108 10:36:36.036363 1226201 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1108 10:36:36.036474 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.058537 1226201 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1108 10:36:36.058580 1226201 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:36.058632 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.058689 1226201 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1108 10:36:36.058706 1226201 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:36.058746 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.058820 1226201 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1108 10:36:36.058837 1226201 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:36.058857 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.067380 1226201 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1108 10:36:36.067425 1226201 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:36.067486 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.067607 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:36.095615 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:36.095735 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 10:36:36.095828 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:36.095881 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:36.095939 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:36.115765 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:36.115933 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:36.212090 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 10:36:36.212235 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:36.212358 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:36.212512 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:36.212639 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:36.235124 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:36.235279 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:36.313417 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:36.313587 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 10:36:36.313704 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1108 10:36:36.313819 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1108 10:36:36.313944 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:36.314060 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:36.342212 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1108 10:36:36.342341 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 10:36:36.342432 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:36.418494 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1108 10:36:36.418605 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1108 10:36:36.418673 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1108 10:36:36.418721 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 10:36:36.418764 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1108 10:36:36.418827 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1108 10:36:36.418872 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1108 10:36:36.418891 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1108 10:36:36.418931 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1108 10:36:36.418995 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 10:36:36.419043 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1108 10:36:36.419086 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 10:36:36.419128 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1108 10:36:36.419145 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1108 10:36:36.454407 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1108 10:36:36.454645 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1108 10:36:36.454671 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1108 10:36:36.454792 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1108 10:36:36.454810 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1108 10:36:36.454879 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1108 10:36:36.454908 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1108 10:36:36.454987 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1108 10:36:36.455004 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1108 10:36:36.454452 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1108 10:36:36.621252 1226201 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1108 10:36:36.621497 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1108 10:36:37.038118 1226201 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
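The warning above means the arm64 host's local daemon only has an amd64 copy of storage-provisioner, so the image has to be re-resolved for the right platform before it is cached and transferred. A sketch of that check using go-containerregistry follows; the library, function name and re-pull strategy are assumptions for illustration, not minikube's exact fix-up code.

package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/crane"
	v1 "github.com/google/go-containerregistry/pkg/v1"
)

// ensureArch re-pulls ref for the wanted architecture when the resolved image
// reports a different one (the "want arm64 got amd64. fixing" case above).
func ensureArch(ref, wantArch string) (v1.Image, error) {
	img, err := crane.Pull(ref)
	if err != nil {
		return nil, err
	}
	cfg, err := img.ConfigFile()
	if err != nil {
		return nil, err
	}
	if cfg.Architecture == wantArch {
		return img, nil
	}
	fmt.Printf("image %s arch mismatch: want %s got %s. fixing\n", ref, wantArch, cfg.Architecture)
	return crane.Pull(ref, crane.WithPlatform(&v1.Platform{OS: "linux", Architecture: wantArch}))
}

func main() {
	if _, err := ensureArch("gcr.io/k8s-minikube/storage-provisioner:v5", "arm64"); err != nil {
		panic(err)
	}
}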
	I1108 10:36:37.038434 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:37.079792 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1108 10:36:37.079830 1226201 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 10:36:37.079906 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 10:36:37.144752 1226201 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1108 10:36:37.144869 1226201 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:37.144950 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:38.801005 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.721056189s)
	I1108 10:36:38.801036 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1108 10:36:38.801054 1226201 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1108 10:36:38.801079 1226201 ssh_runner.go:235] Completed: which crictl: (1.65605994s)
	I1108 10:36:38.801102 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1108 10:36:38.801170 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:40.614613 1226201 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.813402835s)
	I1108 10:36:40.614690 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:40.614878 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.813747252s)
	I1108 10:36:40.614895 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1108 10:36:40.614920 1226201 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 10:36:40.614964 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 10:36:40.641598 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1108 10:36:39.355099 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:41.356007 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:41.980486 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.365460223s)
	I1108 10:36:41.980512 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1108 10:36:41.980531 1226201 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 10:36:41.980577 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 10:36:41.980656 1226201 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.338996867s)
	I1108 10:36:41.980685 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1108 10:36:41.980746 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1108 10:36:43.291943 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.311336485s)
	I1108 10:36:43.291971 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1108 10:36:43.291990 1226201 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 10:36:43.292038 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 10:36:43.292115 1226201 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.311351113s)
	I1108 10:36:43.292136 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1108 10:36:43.292152 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1108 10:36:44.651187 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.359125215s)
	I1108 10:36:44.651213 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1108 10:36:44.651261 1226201 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1108 10:36:44.651312 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1108 10:36:43.854699 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:46.354318 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:48.421823 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.770481142s)
	I1108 10:36:48.421849 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1108 10:36:48.421867 1226201 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1108 10:36:48.421913 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1108 10:36:48.981158 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1108 10:36:48.981198 1226201 cache_images.go:125] Successfully loaded all cached images
	I1108 10:36:48.981204 1226201 cache_images.go:94] duration metric: took 13.360021521s to LoadCachedImages
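The 13.36s above covers, per image: a `podman image inspect` on the node to see whether it is already present, a `crictl rmi` when it is not the expected hash, a stat plus scp of the cached tarball into /var/lib/minikube/images, and a serialized `podman load`. The sketch below compresses that sequence into a local helper that shells out the way the ssh_runner lines do; it is illustrative, runs on the node itself rather than over SSH, and replaces the scp step with a plain file copy.

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCachedImage copies a cached image tarball into place (if missing) and
// loads it into the container runtime via podman, mirroring the per-image
// "existence check" / "scp" / "Loading image" steps in the log.
func loadCachedImage(cacheTar string) error {
	dest := filepath.Join("/var/lib/minikube/images", filepath.Base(cacheTar))
	if _, err := os.Stat(dest); os.IsNotExist(err) {
		src, err := os.Open(cacheTar)
		if err != nil {
			return err
		}
		defer src.Close()
		dst, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer dst.Close()
		if _, err := io.Copy(dst, src); err != nil {
			return err
		}
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", dest).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", dest, err, out)
	}
	fmt.Printf("Transferred and loaded %s\n", cacheTar)
	return nil
}

func main() {
	_ = loadCachedImage(os.Getenv("HOME") + "/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1")
}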
	I1108 10:36:48.981215 1226201 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 10:36:48.981326 1226201 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-291044 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:36:48.981408 1226201 ssh_runner.go:195] Run: crio config
	I1108 10:36:49.058278 1226201 cni.go:84] Creating CNI manager for ""
	I1108 10:36:49.058351 1226201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:36:49.058385 1226201 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:36:49.058438 1226201 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-291044 NodeName:no-preload-291044 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:36:49.058615 1226201 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-291044"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
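
The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later written to /var/tmp/minikube/kubeadm.yaml. As a rough way to inspect such a file offline, here is a minimal Go sketch (illustrative only, not minikube code; the local file path and the gopkg.in/yaml.v3 dependency are assumptions) that splits the stream and prints each document's apiVersion and kind:

// kubeadmdocs.go - illustrative: list the documents in a multi-document kubeadm config.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency, not part of minikube's tooling
)

func main() {
	// Path is an assumption; on the node the generated file sits under /var/tmp/minikube.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break // end of the YAML stream
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%-35s %s\n", doc.APIVersion, doc.Kind)
	}
}

Against the config above this would list kubeadm.k8s.io/v1beta4 InitConfiguration and ClusterConfiguration, kubelet.config.k8s.io/v1beta1 KubeletConfiguration, and kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration.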
	
	I1108 10:36:49.058739 1226201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:36:49.067397 1226201 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1108 10:36:49.067501 1226201 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1108 10:36:49.075657 1226201 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1108 10:36:49.075917 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1108 10:36:49.076538 1226201 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1108 10:36:49.076686 1226201 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1108 10:36:49.080787 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1108 10:36:49.080826 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1108 10:36:49.780901 1226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:36:49.801064 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1108 10:36:49.806025 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1108 10:36:49.806117 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1108 10:36:49.895424 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1108 10:36:49.904976 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1108 10:36:49.905067 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1108 10:36:50.469814 1226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:36:50.478356 1226201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 10:36:50.492169 1226201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:36:50.506985 1226201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1108 10:36:50.521491 1226201 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:36:50.525365 1226201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:36:50.535827 1226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:36:50.653600 1226201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:36:50.689572 1226201 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044 for IP: 192.168.85.2
	I1108 10:36:50.689591 1226201 certs.go:195] generating shared ca certs ...
	I1108 10:36:50.689607 1226201 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:50.689741 1226201 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:36:50.689783 1226201 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:36:50.689790 1226201 certs.go:257] generating profile certs ...
	I1108 10:36:50.689845 1226201 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.key
	I1108 10:36:50.689855 1226201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt with IP's: []
	W1108 10:36:48.853130 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:51.353398 1222758 pod_ready.go:94] pod "coredns-66bc5c9577-74xnp" is "Ready"
	I1108 10:36:51.353420 1222758 pod_ready.go:86] duration metric: took 38.50605044s for pod "coredns-66bc5c9577-74xnp" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.356557 1222758 pod_ready.go:83] waiting for pod "etcd-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.361486 1222758 pod_ready.go:94] pod "etcd-embed-certs-790346" is "Ready"
	I1108 10:36:51.361507 1222758 pod_ready.go:86] duration metric: took 4.928624ms for pod "etcd-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.364937 1222758 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.375138 1222758 pod_ready.go:94] pod "kube-apiserver-embed-certs-790346" is "Ready"
	I1108 10:36:51.375160 1222758 pod_ready.go:86] duration metric: took 10.203511ms for pod "kube-apiserver-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.382259 1222758 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.551480 1222758 pod_ready.go:94] pod "kube-controller-manager-embed-certs-790346" is "Ready"
	I1108 10:36:51.551557 1222758 pod_ready.go:86] duration metric: took 169.224325ms for pod "kube-controller-manager-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.751419 1222758 pod_ready.go:83] waiting for pod "kube-proxy-fx79j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:52.151248 1222758 pod_ready.go:94] pod "kube-proxy-fx79j" is "Ready"
	I1108 10:36:52.151329 1222758 pod_ready.go:86] duration metric: took 399.838209ms for pod "kube-proxy-fx79j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:52.351673 1222758 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:52.751743 1222758 pod_ready.go:94] pod "kube-scheduler-embed-certs-790346" is "Ready"
	I1108 10:36:52.751775 1222758 pod_ready.go:86] duration metric: took 400.020833ms for pod "kube-scheduler-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:52.751790 1222758 pod_ready.go:40] duration metric: took 39.911499587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:36:52.818557 1222758 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:36:52.822152 1222758 out.go:179] * Done! kubectl is now configured to use "embed-certs-790346" cluster and "default" namespace by default
	I1108 10:36:51.229616 1226201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt ...
	I1108 10:36:51.229651 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: {Name:mk305b9b1018de0d9ca1d9dedc09b3076d8b16c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:51.229896 1226201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.key ...
	I1108 10:36:51.229914 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.key: {Name:mkac67e6f5d15d6f1b52a8c327f13a3c452117f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:51.230054 1226201 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key.e7c39ab7
	I1108 10:36:51.230076 1226201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt.e7c39ab7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1108 10:36:51.425151 1226201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt.e7c39ab7 ...
	I1108 10:36:51.425185 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt.e7c39ab7: {Name:mk99ba7afd201b9721628697cdfbc9c598ef2418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:51.425404 1226201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key.e7c39ab7 ...
	I1108 10:36:51.425421 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key.e7c39ab7: {Name:mk899e4719f05dba7ca71c6be10914edd026ecfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:51.425552 1226201 certs.go:382] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt.e7c39ab7 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt
	I1108 10:36:51.425675 1226201 certs.go:386] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key.e7c39ab7 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key
	I1108 10:36:51.425763 1226201 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key
	I1108 10:36:51.425809 1226201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.crt with IP's: []
	I1108 10:36:52.134144 1226201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.crt ...
	I1108 10:36:52.134178 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.crt: {Name:mk34f6f4c9db2b8e21c33b179b75d255b04c174d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:52.134427 1226201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key ...
	I1108 10:36:52.134446 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key: {Name:mkf47b2faa847274fb1ed09f673cf2f01ba6b83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
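
The crypto.go entries above each create a fresh key pair for a profile certificate and sign it with the shared minikubeCA. A minimal standard-library sketch of that kind of step follows, purely illustrative and not minikube's actual certs.go: the file names, PKCS#1 key encoding, subject fields, and validity period are all assumptions.

// clientcert.go - illustrative: sign a new client certificate with an existing CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// Load the CA produced earlier in the run (paths and PEM layout are assumptions).
	caCertPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca.key")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("missing PEM block in ca.crt or ca.key")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the client identity ("minikube-user" in the log above).
	clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}

	// Sign the client certificate with the CA and write out PEM-encoded files.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &clientKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(clientKey)})
	if err := os.WriteFile("client.crt", certPEM, 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("client.key", keyPEM, 0o600); err != nil {
		log.Fatal(err)
	}
}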
	I1108 10:36:52.134694 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:36:52.134758 1226201 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:36:52.134775 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:36:52.134811 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:36:52.134858 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:36:52.134888 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:36:52.134952 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:36:52.135589 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:36:52.154478 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:36:52.173990 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:36:52.192997 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:36:52.210025 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 10:36:52.228648 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:36:52.246019 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:36:52.264059 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:36:52.282748 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:36:52.299980 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:36:52.318905 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:36:52.337601 1226201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:36:52.352567 1226201 ssh_runner.go:195] Run: openssl version
	I1108 10:36:52.362052 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:36:52.372576 1226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:36:52.377218 1226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:36:52.377310 1226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:36:52.420953 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:36:52.429452 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:36:52.438110 1226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:52.441920 1226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:52.441998 1226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:52.487792 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:36:52.496278 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:36:52.504410 1226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:36:52.508361 1226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:36:52.508464 1226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:36:52.554565 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:36:52.563257 1226201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:36:52.566928 1226201 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:36:52.567022 1226201 kubeadm.go:401] StartCluster: {Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:36:52.567116 1226201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:36:52.567187 1226201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:36:52.593774 1226201 cri.go:89] found id: ""
	I1108 10:36:52.593870 1226201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:36:52.602428 1226201 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:36:52.610325 1226201 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:36:52.610432 1226201 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:36:52.618396 1226201 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:36:52.618425 1226201 kubeadm.go:158] found existing configuration files:
	
	I1108 10:36:52.618516 1226201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:36:52.626228 1226201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:36:52.626326 1226201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:36:52.633832 1226201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:36:52.641477 1226201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:36:52.641557 1226201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:36:52.649105 1226201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:36:52.656945 1226201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:36:52.657010 1226201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:36:52.664497 1226201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:36:52.672038 1226201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:36:52.672101 1226201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:36:52.680327 1226201 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:36:52.747526 1226201 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:36:52.747841 1226201 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:36:52.844865 1226201 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Nov 08 10:36:42 embed-certs-790346 crio[650]: time="2025-11-08T10:36:42.968747653Z" level=info msg="Started container" PID=1633 containerID=dd8fae91fcb9db2903395cab846cc54cb65d2787b94c2a4c392f8fced9aedf0d description=kube-system/storage-provisioner/storage-provisioner id=fac4a862-89bc-4dbf-8045-91b7a43efd99 name=/runtime.v1.RuntimeService/StartContainer sandboxID=603c0d7a0e3c07dabb0080b0ecd24d42246742f781d98b2aa1a3876672c9aefc
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.668316182Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=59a1c58c-6bf8-45ea-80d0-500c1410bbe4 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.669811744Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ea5fcaf7-6cb0-4b05-a97e-add49c61ffde name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.6707913Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6/dashboard-metrics-scraper" id=344a787a-6af2-4c32-90cf-20cd47efb6f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.670881521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.681716815Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.682320564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.715631502Z" level=info msg="Created container 26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6/dashboard-metrics-scraper" id=344a787a-6af2-4c32-90cf-20cd47efb6f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.716829415Z" level=info msg="Starting container: 26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512" id=d7380c42-9fe0-428d-aeac-45f96b2ba79b name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.724642088Z" level=info msg="Started container" PID=1646 containerID=26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6/dashboard-metrics-scraper id=d7380c42-9fe0-428d-aeac-45f96b2ba79b name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac70989979f8b61f80ef5a076ea7c4ae76e220ebcd1305e6fa97fadf8b241f22
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.95651871Z" level=info msg="Removing container: 1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927" id=a22ede32-794d-428b-bf9f-424160938bcd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.967102312Z" level=info msg="Error loading conmon cgroup of container 1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927: cgroup deleted" id=a22ede32-794d-428b-bf9f-424160938bcd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.973339245Z" level=info msg="Removed container 1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6/dashboard-metrics-scraper" id=a22ede32-794d-428b-bf9f-424160938bcd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.531236366Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.537467483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.537629693Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.537713728Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.544838101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.544996374Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.545067206Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.551369402Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.551530176Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.551608902Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.555021584Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.555211978Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	26ff283aa3976       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   2                   ac70989979f8b       dashboard-metrics-scraper-6ffb444bf9-2bfj6   kubernetes-dashboard
	dd8fae91fcb9d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   603c0d7a0e3c0       storage-provisioner                          kube-system
	18faf513aa0bb       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   d78cdaf352819       kubernetes-dashboard-855c9754f9-xxk4p        kubernetes-dashboard
	1c61611abd023       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   97d25fec96dbe       coredns-66bc5c9577-74xnp                     kube-system
	99642b383fc0d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   603c0d7a0e3c0       storage-provisioner                          kube-system
	b25262d04ac63       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   dd03c8406009d       kindnet-8978r                                kube-system
	b6fe2b3b81b15       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   224d997255f90       busybox                                      default
	bb49cec67a688       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   2b9bf043d820b       kube-proxy-fx79j                             kube-system
	e9b1d9f7c0483       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9de3151dfd702       kube-controller-manager-embed-certs-790346   kube-system
	86097d71b8a6e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   ecca18b99343a       kube-apiserver-embed-certs-790346            kube-system
	ea89ad8d0eb68       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   36e95dd28b988       etcd-embed-certs-790346                      kube-system
	2edd058c6ccdb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   98a810fab7b5c       kube-scheduler-embed-certs-790346            kube-system
	
	
	==> coredns [1c61611abd0236cf3edd7cd7cbcb13d39c0e46247750268e93e4590ecd739144] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48128 - 52608 "HINFO IN 4205675465250288656.7918789902488336052. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012861907s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-790346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-790346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=embed-certs-790346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_34_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:34:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-790346
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:37:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:36:42 +0000   Sat, 08 Nov 2025 10:34:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:36:42 +0000   Sat, 08 Nov 2025 10:34:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:36:42 +0000   Sat, 08 Nov 2025 10:34:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:36:42 +0000   Sat, 08 Nov 2025 10:35:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-790346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                eee914a9-8e5e-440d-b038-b0a41c7677a4
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-74xnp                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-embed-certs-790346                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-8978r                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-embed-certs-790346             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-embed-certs-790346    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-fx79j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-embed-certs-790346             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2bfj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xxk4p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m20s              kube-proxy       
	  Normal   Starting                 57s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m26s              kubelet          Node embed-certs-790346 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m26s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s              kubelet          Node embed-certs-790346 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m26s              kubelet          Node embed-certs-790346 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m26s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m22s              node-controller  Node embed-certs-790346 event: Registered Node embed-certs-790346 in Controller
	  Normal   NodeReady                99s                kubelet          Node embed-certs-790346 status is now: NodeReady
	  Normal   Starting                 65s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node embed-certs-790346 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node embed-certs-790346 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)  kubelet          Node embed-certs-790346 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node embed-certs-790346 event: Registered Node embed-certs-790346 in Controller
	
	
	==> dmesg <==
	[ +18.424643] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:36] overlayfs: idmapped layers are currently not supported
	[ +30.788294] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ea89ad8d0eb688f083aeb7d472a94d7a3f3b2063341d0ca898c464ca703d3501] <==
	{"level":"warn","ts":"2025-11-08T10:36:09.085429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.116570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.178170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.202827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.262985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.301500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.339222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.376951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.396753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.428908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.463850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.481640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.515674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.536801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.572477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.644538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.675724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.724490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.747775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.780031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.839777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.884162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.940758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:10.022993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:10.190494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48306","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:10 up  9:19,  0 user,  load average: 4.34, 3.89, 3.14
	Linux embed-certs-790346 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b25262d04ac63153c5449a4717cac831ae0adffd457e2b4ff0b7e0902f0792e0] <==
	I1108 10:36:12.337155       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:36:12.337581       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:36:12.340725       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:36:12.340750       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:36:12.340765       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:36:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:36:12.533055       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:36:12.533131       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:36:12.533165       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:36:12.534157       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:36:42.531771       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:36:42.534210       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:36:42.534373       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:36:42.534502       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:36:44.233309       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:36:44.233337       1 metrics.go:72] Registering metrics
	I1108 10:36:44.233403       1 controller.go:711] "Syncing nftables rules"
	I1108 10:36:52.530940       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:36:52.531003       1 main.go:301] handling current node
	I1108 10:37:02.536563       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:37:02.536692       1 main.go:301] handling current node
	
	
	==> kube-apiserver [86097d71b8a6e43eb320fe2cd739591210e92690c38263951b284aa8c7ee0039] <==
	I1108 10:36:11.446842       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:36:11.446858       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:36:11.446961       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:36:11.447039       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:36:11.447079       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:36:11.447297       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:36:11.447391       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 10:36:11.448509       1 aggregator.go:171] initial CRD sync complete...
	I1108 10:36:11.448524       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:36:11.448529       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:36:11.448535       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:36:11.465680       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:36:11.477905       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1108 10:36:11.490815       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:36:11.733207       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:36:11.949535       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:36:12.403163       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:36:12.492386       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:36:12.555619       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:36:12.574637       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:36:12.653860       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.68.113"}
	I1108 10:36:12.673277       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.107.185"}
	I1108 10:36:14.614227       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:36:14.982418       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:36:15.084538       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e9b1d9f7c0483027ded3f21b252ced6d355c5f322e03b235441f038ad56cee88] <==
	I1108 10:36:14.613337       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:36:14.613344       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:36:14.616127       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:36:14.616227       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:36:14.616309       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-790346"
	I1108 10:36:14.616357       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:36:14.623378       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:36:14.624337       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:36:14.624362       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:36:14.628778       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:36:14.628872       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 10:36:14.629240       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:36:14.629451       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:36:14.629533       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:36:14.629663       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:36:14.630016       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:36:14.631144       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:36:14.631215       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:36:14.632597       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 10:36:14.635903       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 10:36:14.642803       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 10:36:14.646308       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:36:14.653074       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:36:14.655310       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:36:14.662580       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bb49cec67a688ecce92db1dcf1da23dc04e0dab933a76690980178360b633df1] <==
	I1108 10:36:12.486947       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:36:12.681481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:36:12.782410       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:36:12.782452       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:36:12.782545       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:36:12.870483       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:36:12.870601       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:36:12.893420       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:36:12.894307       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:36:12.894578       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:36:12.896667       1 config.go:200] "Starting service config controller"
	I1108 10:36:12.896748       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:36:12.896800       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:36:12.896872       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:36:12.896938       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:36:12.896983       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:36:12.897676       1 config.go:309] "Starting node config controller"
	I1108 10:36:12.899183       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:36:12.899233       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:36:12.997723       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:36:12.997734       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:36:12.997752       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2edd058c6ccdbae4d8675a306904465a1fe93113e0e01793a923f585b98be4d2] <==
	I1108 10:36:09.398554       1 serving.go:386] Generated self-signed cert in-memory
	W1108 10:36:11.208912       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 10:36:11.208945       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 10:36:11.208956       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 10:36:11.208963       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 10:36:11.330672       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:36:11.330776       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:36:11.337546       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:36:11.340728       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:36:11.360151       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:36:11.340837       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:36:11.461046       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:36:15 embed-certs-790346 kubelet[777]: W1108 10:36:15.577100     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/crio-d78cdaf352819c36c696ed49e8f9fe70017bf76c245c0a718d7a459f78ebd97a WatchSource:0}: Error finding container d78cdaf352819c36c696ed49e8f9fe70017bf76c245c0a718d7a459f78ebd97a: Status 404 returned error can't find the container with id d78cdaf352819c36c696ed49e8f9fe70017bf76c245c0a718d7a459f78ebd97a
	Nov 08 10:36:15 embed-certs-790346 kubelet[777]: W1108 10:36:15.596125     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/crio-ac70989979f8b61f80ef5a076ea7c4ae76e220ebcd1305e6fa97fadf8b241f22 WatchSource:0}: Error finding container ac70989979f8b61f80ef5a076ea7c4ae76e220ebcd1305e6fa97fadf8b241f22: Status 404 returned error can't find the container with id ac70989979f8b61f80ef5a076ea7c4ae76e220ebcd1305e6fa97fadf8b241f22
	Nov 08 10:36:21 embed-certs-790346 kubelet[777]: I1108 10:36:21.093345     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 10:36:23 embed-certs-790346 kubelet[777]: I1108 10:36:23.967759     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xxk4p" podStartSLOduration=1.447146415 podStartE2EDuration="8.966298524s" podCreationTimestamp="2025-11-08 10:36:15 +0000 UTC" firstStartedPulling="2025-11-08 10:36:15.583440935 +0000 UTC m=+10.139382393" lastFinishedPulling="2025-11-08 10:36:23.102593044 +0000 UTC m=+17.658534502" observedRunningTime="2025-11-08 10:36:23.894709541 +0000 UTC m=+18.450651048" watchObservedRunningTime="2025-11-08 10:36:23.966298524 +0000 UTC m=+18.522239990"
	Nov 08 10:36:30 embed-certs-790346 kubelet[777]: I1108 10:36:30.891209     777 scope.go:117] "RemoveContainer" containerID="30f5ccacfd1ec04945cb0496a2a6e8a099e5a226edae73a7e085441fd8c49e93"
	Nov 08 10:36:31 embed-certs-790346 kubelet[777]: I1108 10:36:31.897306     777 scope.go:117] "RemoveContainer" containerID="1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927"
	Nov 08 10:36:31 embed-certs-790346 kubelet[777]: E1108 10:36:31.897471     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:36:31 embed-certs-790346 kubelet[777]: I1108 10:36:31.898519     777 scope.go:117] "RemoveContainer" containerID="30f5ccacfd1ec04945cb0496a2a6e8a099e5a226edae73a7e085441fd8c49e93"
	Nov 08 10:36:32 embed-certs-790346 kubelet[777]: I1108 10:36:32.901184     777 scope.go:117] "RemoveContainer" containerID="1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927"
	Nov 08 10:36:32 embed-certs-790346 kubelet[777]: E1108 10:36:32.901354     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:36:35 embed-certs-790346 kubelet[777]: I1108 10:36:35.550512     777 scope.go:117] "RemoveContainer" containerID="1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927"
	Nov 08 10:36:35 embed-certs-790346 kubelet[777]: E1108 10:36:35.550707     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:36:42 embed-certs-790346 kubelet[777]: I1108 10:36:42.929153     777 scope.go:117] "RemoveContainer" containerID="99642b383fc0dec33ad2ce8f0c4a4ffe1b697e862a2493a131dd3ed36626da5e"
	Nov 08 10:36:50 embed-certs-790346 kubelet[777]: I1108 10:36:50.667800     777 scope.go:117] "RemoveContainer" containerID="1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927"
	Nov 08 10:36:50 embed-certs-790346 kubelet[777]: I1108 10:36:50.951417     777 scope.go:117] "RemoveContainer" containerID="1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927"
	Nov 08 10:36:50 embed-certs-790346 kubelet[777]: I1108 10:36:50.951770     777 scope.go:117] "RemoveContainer" containerID="26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512"
	Nov 08 10:36:50 embed-certs-790346 kubelet[777]: E1108 10:36:50.951943     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:36:55 embed-certs-790346 kubelet[777]: I1108 10:36:55.550424     777 scope.go:117] "RemoveContainer" containerID="26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512"
	Nov 08 10:36:55 embed-certs-790346 kubelet[777]: E1108 10:36:55.551246     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:37:05 embed-certs-790346 kubelet[777]: E1108 10:37:05.924256     777 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/crio-36e95dd28b9882cf357f3f321d33c57a00d7b9bd20e736dd292293b2307f40a9\": RecentStats: unable to find data in memory cache]"
	Nov 08 10:37:06 embed-certs-790346 kubelet[777]: I1108 10:37:06.667024     777 scope.go:117] "RemoveContainer" containerID="26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512"
	Nov 08 10:37:06 embed-certs-790346 kubelet[777]: E1108 10:37:06.667229     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:37:06 embed-certs-790346 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:37:06 embed-certs-790346 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:37:06 embed-certs-790346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [18faf513aa0bb244af506a701456ddf2f5242f9fb0fceca3afc5ff31f5ff8f5e] <==
	2025/11/08 10:36:23 Using namespace: kubernetes-dashboard
	2025/11/08 10:36:23 Using in-cluster config to connect to apiserver
	2025/11/08 10:36:23 Using secret token for csrf signing
	2025/11/08 10:36:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:36:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:36:23 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:36:23 Generating JWE encryption key
	2025/11/08 10:36:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:36:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:36:24 Initializing JWE encryption key from synchronized object
	2025/11/08 10:36:24 Creating in-cluster Sidecar client
	2025/11/08 10:36:24 Serving insecurely on HTTP port: 9090
	2025/11/08 10:36:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:36:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:36:23 Starting overwatch
	
	
	==> storage-provisioner [99642b383fc0dec33ad2ce8f0c4a4ffe1b697e862a2493a131dd3ed36626da5e] <==
	I1108 10:36:12.365624       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:36:42.382436       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [dd8fae91fcb9db2903395cab846cc54cb65d2787b94c2a4c392f8fced9aedf0d] <==
	I1108 10:36:43.006354       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:36:43.006418       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:36:43.012904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:46.468049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:50.765793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:54.377268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:57.431601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:00.455428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:00.462335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:37:00.462629       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:37:00.465687       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-790346_3fef511d-0992-4c37-933f-ffd4f2753bd5!
	I1108 10:37:00.475137       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58b9af2b-5b91-43b5-9be5-4a96191976d2", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-790346_3fef511d-0992-4c37-933f-ffd4f2753bd5 became leader
	W1108 10:37:00.484725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:00.491181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:37:00.566248       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-790346_3fef511d-0992-4c37-933f-ffd4f2753bd5!
	W1108 10:37:02.494639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:02.501724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:04.505610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:04.512720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:06.525338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:06.538547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:08.542495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:08.552838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:10.564516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:10.579073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790346 -n embed-certs-790346
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790346 -n embed-certs-790346: exit status 2 (462.58054ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-790346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-790346
helpers_test.go:243: (dbg) docker inspect embed-certs-790346:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7",
	        "Created": "2025-11-08T10:34:14.160209579Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1222886,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:35:58.171747853Z",
	            "FinishedAt": "2025-11-08T10:35:57.352811293Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/hostname",
	        "HostsPath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/hosts",
	        "LogPath": "/var/lib/docker/containers/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7-json.log",
	        "Name": "/embed-certs-790346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-790346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-790346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7",
	                "LowerDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12ff454229070a09f9f9807b3abd185e295db819685091c00fe386eea2d0d512/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-790346",
	                "Source": "/var/lib/docker/volumes/embed-certs-790346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-790346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-790346",
	                "name.minikube.sigs.k8s.io": "embed-certs-790346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "990617da7afb303e4cf8c211732d106eeb42ef18848e326919dde8831cc39856",
	            "SandboxKey": "/var/run/docker/netns/990617da7afb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34532"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34533"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34536"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34534"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34535"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-790346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:fd:7a:a1:60:06",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d495b48ffde5b28a4ff62dc6240c1429227e085b124c5835b7607c15b8bf3dd5",
	                    "EndpointID": "43d90fa0a03078c26c56ed8c6be4c86ef1a8f22fc238b84f15421feeb8a3e062",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-790346",
	                        "c42811f48049"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790346 -n embed-certs-790346
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790346 -n embed-certs-790346: exit status 2 (454.34183ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-790346 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-790346 logs -n 25: (1.758753209s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:32 UTC │ 08 Nov 25 10:32 UTC │
	│ image   │ old-k8s-version-171136 image list --format=json                                                                                                                                                                                               │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-171136 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-837698                                                                                                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-236075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-236075 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-236075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-790346 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-790346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ image   │ default-k8s-diff-port-236075 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ pause   │ -p default-k8s-diff-port-236075 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-553553                                                                                                                                                                                                               │ disable-driver-mounts-553553 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	│ image   │ embed-certs-790346 image list --format=json                                                                                                                                                                                                   │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-790346 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:36:25
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:36:25.941677 1226201 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:36:25.941777 1226201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:36:25.941782 1226201 out.go:374] Setting ErrFile to fd 2...
	I1108 10:36:25.941834 1226201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:36:25.942086 1226201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:36:25.942486 1226201 out.go:368] Setting JSON to false
	I1108 10:36:25.944473 1226201 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33531,"bootTime":1762564655,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:36:25.944541 1226201 start.go:143] virtualization:  
	I1108 10:36:25.948664 1226201 out.go:179] * [no-preload-291044] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:36:25.953213 1226201 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:36:25.953267 1226201 notify.go:221] Checking for updates...
	I1108 10:36:25.960527 1226201 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:36:25.963771 1226201 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:36:25.967203 1226201 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:36:25.970527 1226201 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:36:25.973881 1226201 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:36:25.977660 1226201 config.go:182] Loaded profile config "embed-certs-790346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:36:25.977806 1226201 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:36:26.020311 1226201 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:36:26.020510 1226201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:36:26.149774 1226201 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:36:26.138261978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:36:26.149898 1226201 docker.go:319] overlay module found
	I1108 10:36:26.155544 1226201 out.go:179] * Using the docker driver based on user configuration
	I1108 10:36:26.159200 1226201 start.go:309] selected driver: docker
	I1108 10:36:26.159222 1226201 start.go:930] validating driver "docker" against <nil>
	I1108 10:36:26.159252 1226201 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:36:26.159983 1226201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:36:26.286276 1226201 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:36:26.275188875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:36:26.286431 1226201 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:36:26.286663 1226201 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:36:26.291168 1226201 out.go:179] * Using Docker driver with root privileges
	I1108 10:36:26.294493 1226201 cni.go:84] Creating CNI manager for ""
	I1108 10:36:26.294561 1226201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:36:26.294571 1226201 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:36:26.294651 1226201 start.go:353] cluster config:
	{Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:36:26.298100 1226201 out.go:179] * Starting "no-preload-291044" primary control-plane node in "no-preload-291044" cluster
	I1108 10:36:26.301403 1226201 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:36:26.304723 1226201 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:36:26.307996 1226201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:36:26.308000 1226201 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:36:26.308146 1226201 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/config.json ...
	I1108 10:36:26.308177 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/config.json: {Name:mk712d9c640d8e5ee04268d7bb1adec91ec48f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:26.308383 1226201 cache.go:107] acquiring lock: {Name:mk8513c6159258582048bf022eb3626495f0ef99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.308479 1226201 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 10:36:26.308494 1226201 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 119.783µs
	I1108 10:36:26.308503 1226201 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 10:36:26.308524 1226201 cache.go:107] acquiring lock: {Name:mkfbe116f289c09e7f023243a3e334812266f562 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.308620 1226201 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:26.308814 1226201 cache.go:107] acquiring lock: {Name:mkab778ec210a01a148a027551ae4dd6f48ac681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.308898 1226201 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:26.309029 1226201 cache.go:107] acquiring lock: {Name:mk7e5c4997cde36ed0e08a0661a5a5dfada4e032 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.309100 1226201 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:26.309221 1226201 cache.go:107] acquiring lock: {Name:mkc673276c059e1336edcaed46b38c8432a558c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.309285 1226201 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:26.309385 1226201 cache.go:107] acquiring lock: {Name:mk0c87ccf4c259c637cc851ae066ca5ca4e4afa3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.309445 1226201 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1108 10:36:26.309540 1226201 cache.go:107] acquiring lock: {Name:mkde9e8ad2f329aff2c9e641a9eec6a25ba01057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.309604 1226201 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:26.309694 1226201 cache.go:107] acquiring lock: {Name:mkfd6f0a7827507a867318ffa03b1f88753d73c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.309760 1226201 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:26.311074 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:26.311539 1226201 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:26.311730 1226201 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:26.312014 1226201 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1108 10:36:26.312192 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:26.312360 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:26.312560 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:26.338404 1226201 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:36:26.338430 1226201 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:36:26.338444 1226201 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:36:26.338467 1226201 start.go:360] acquireMachinesLock for no-preload-291044: {Name:mkddf61b3e3a9309635e3814dcc2626dcf0ac06a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:36:26.338561 1226201 start.go:364] duration metric: took 75.189µs to acquireMachinesLock for "no-preload-291044"
	I1108 10:36:26.338590 1226201 start.go:93] Provisioning new machine with config: &{Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:36:26.338667 1226201 start.go:125] createHost starting for "" (driver="docker")
	W1108 10:36:23.856615 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:25.859049 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:27.863049 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:26.344847 1226201 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:36:26.345095 1226201 start.go:159] libmachine.API.Create for "no-preload-291044" (driver="docker")
	I1108 10:36:26.345132 1226201 client.go:173] LocalClient.Create starting
	I1108 10:36:26.345197 1226201 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem
	I1108 10:36:26.345231 1226201 main.go:143] libmachine: Decoding PEM data...
	I1108 10:36:26.345244 1226201 main.go:143] libmachine: Parsing certificate...
	I1108 10:36:26.345287 1226201 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem
	I1108 10:36:26.345304 1226201 main.go:143] libmachine: Decoding PEM data...
	I1108 10:36:26.345313 1226201 main.go:143] libmachine: Parsing certificate...
	I1108 10:36:26.345657 1226201 cli_runner.go:164] Run: docker network inspect no-preload-291044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:36:26.383116 1226201 cli_runner.go:211] docker network inspect no-preload-291044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:36:26.383201 1226201 network_create.go:284] running [docker network inspect no-preload-291044] to gather additional debugging logs...
	I1108 10:36:26.383216 1226201 cli_runner.go:164] Run: docker network inspect no-preload-291044
	W1108 10:36:26.413064 1226201 cli_runner.go:211] docker network inspect no-preload-291044 returned with exit code 1
	I1108 10:36:26.413098 1226201 network_create.go:287] error running [docker network inspect no-preload-291044]: docker network inspect no-preload-291044: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-291044 not found
	I1108 10:36:26.413110 1226201 network_create.go:289] output of [docker network inspect no-preload-291044]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-291044 not found
	
	** /stderr **
	I1108 10:36:26.413196 1226201 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:36:26.441097 1226201 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f127b1978c3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:c7:37:65:8c:96} reservation:<nil>}
	I1108 10:36:26.441417 1226201 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b98bf73d2e94 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:99:be:46:ea:86} reservation:<nil>}
	I1108 10:36:26.441826 1226201 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c4df73992be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:ad:c1:c0:ea:6d} reservation:<nil>}
	I1108 10:36:26.442077 1226201 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d495b48ffde5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:ac:97:fe:92:64} reservation:<nil>}
	I1108 10:36:26.443253 1226201 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019474d0}
	I1108 10:36:26.443328 1226201 network_create.go:124] attempt to create docker network no-preload-291044 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 10:36:26.443420 1226201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-291044 no-preload-291044
	I1108 10:36:26.536794 1226201 network_create.go:108] docker network no-preload-291044 192.168.85.0/24 created
	I1108 10:36:26.536879 1226201 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-291044" container
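For context on the failed inspect a few lines up: exit status 1 simply means the named network does not exist yet, so minikube walks the 192.168.x.0/24 candidates, skips subnets already bound to a bridge, and creates the first free one. A rough shell equivalent of that probe-then-create step, with the name, subnet, gateway and labels copied from the command in this log (the ip-masq/icc options are omitted here):

	# a sketch of the existence probe followed by network creation, not the actual minikube code
	if ! docker network inspect no-preload-291044 >/dev/null 2>&1; then
	  docker network create --driver=bridge \
	    --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	    -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=no-preload-291044 \
	    no-preload-291044
	fi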
	I1108 10:36:26.536993 1226201 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:36:26.561011 1226201 cli_runner.go:164] Run: docker volume create no-preload-291044 --label name.minikube.sigs.k8s.io=no-preload-291044 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:36:26.589777 1226201 oci.go:103] Successfully created a docker volume no-preload-291044
	I1108 10:36:26.589857 1226201 cli_runner.go:164] Run: docker run --rm --name no-preload-291044-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-291044 --entrypoint /usr/bin/test -v no-preload-291044:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:36:26.639461 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1108 10:36:26.662227 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1108 10:36:26.680739 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1108 10:36:26.683418 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1108 10:36:26.686954 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1108 10:36:26.705368 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1108 10:36:26.705393 1226201 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 396.008663ms
	I1108 10:36:26.705405 1226201 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1108 10:36:26.711393 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1108 10:36:26.744137 1226201 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1108 10:36:27.079158 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1108 10:36:27.079187 1226201 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 769.96591ms
	I1108 10:36:27.079200 1226201 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1108 10:36:27.639645 1226201 cli_runner.go:217] Completed: docker run --rm --name no-preload-291044-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-291044 --entrypoint /usr/bin/test -v no-preload-291044:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (1.049749079s)
	I1108 10:36:27.640519 1226201 oci.go:107] Successfully prepared a docker volume no-preload-291044
	I1108 10:36:27.640571 1226201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1108 10:36:27.640715 1226201 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:36:27.641292 1226201 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:36:27.756693 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1108 10:36:27.756727 1226201 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.447700896s
	I1108 10:36:27.756741 1226201 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1108 10:36:27.757763 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1108 10:36:27.757800 1226201 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.44810808s
	I1108 10:36:27.757812 1226201 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1108 10:36:27.847682 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1108 10:36:27.847711 1226201 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.538899672s
	I1108 10:36:27.847724 1226201 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1108 10:36:27.885535 1226201 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-291044 --name no-preload-291044 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-291044 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-291044 --network no-preload-291044 --ip 192.168.85.2 --volume no-preload-291044:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
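The kic "node" is just the privileged container started above: ports 22, 2376, 5000, 8443 and 32443 are all published on ephemeral 127.0.0.1 ports, and the provisioner later recovers the SSH port with a container inspect rather than hard-coding it. A sketch of that lookup, using the container name and the same Go template that appears further down in this log:

	# prints the host port Docker assigned to 22/tcp (34537 in this run), dialed as 127.0.0.1:<port>
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  no-preload-291044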
	I1108 10:36:28.002974 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1108 10:36:28.003516 1226201 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.694984206s
	I1108 10:36:28.003534 1226201 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1108 10:36:28.528908 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Running}}
	I1108 10:36:28.571303 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:36:28.603948 1226201 cli_runner.go:164] Run: docker exec no-preload-291044 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:36:28.696764 1226201 oci.go:144] the created container "no-preload-291044" has a running status.
	I1108 10:36:28.696800 1226201 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa...
	I1108 10:36:28.950439 1226201 cache.go:157] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1108 10:36:28.950525 1226201 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.640984663s
	I1108 10:36:28.950554 1226201 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1108 10:36:28.953578 1226201 cache.go:87] Successfully saved all images to host disk.
	I1108 10:36:29.137687 1226201 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:36:29.162377 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:36:29.189754 1226201 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:36:29.189774 1226201 kic_runner.go:114] Args: [docker exec --privileged no-preload-291044 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:36:29.255610 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:36:29.275785 1226201 machine.go:94] provisionDockerMachine start ...
	I1108 10:36:29.275909 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:29.298213 1226201 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:29.298569 1226201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1108 10:36:29.298580 1226201 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:36:29.299335 1226201 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54184->127.0.0.1:34537: read: connection reset by peer
	W1108 10:36:30.353687 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:32.354549 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:32.464112 1226201 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-291044
	
	I1108 10:36:32.464137 1226201 ubuntu.go:182] provisioning hostname "no-preload-291044"
	I1108 10:36:32.464203 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:32.482229 1226201 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:32.482543 1226201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1108 10:36:32.482561 1226201 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-291044 && echo "no-preload-291044" | sudo tee /etc/hostname
	I1108 10:36:32.642542 1226201 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-291044
	
	I1108 10:36:32.642626 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:32.660721 1226201 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:32.661051 1226201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1108 10:36:32.661075 1226201 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-291044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-291044/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-291044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:36:32.812571 1226201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:36:32.812598 1226201 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:36:32.812623 1226201 ubuntu.go:190] setting up certificates
	I1108 10:36:32.812632 1226201 provision.go:84] configureAuth start
	I1108 10:36:32.812693 1226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:36:32.835184 1226201 provision.go:143] copyHostCerts
	I1108 10:36:32.835262 1226201 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:36:32.835276 1226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:36:32.835360 1226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:36:32.835469 1226201 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:36:32.835480 1226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:36:32.835510 1226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:36:32.835569 1226201 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:36:32.835578 1226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:36:32.835605 1226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:36:32.835656 1226201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.no-preload-291044 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-291044]
	I1108 10:36:33.257005 1226201 provision.go:177] copyRemoteCerts
	I1108 10:36:33.257073 1226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:36:33.257124 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:33.274760 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:36:33.381132 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:36:33.400553 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:36:33.422403 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:36:33.443177 1226201 provision.go:87] duration metric: took 630.530919ms to configureAuth
	I1108 10:36:33.443206 1226201 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:36:33.443433 1226201 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:36:33.443550 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:33.461150 1226201 main.go:143] libmachine: Using SSH client type: native
	I1108 10:36:33.461455 1226201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1108 10:36:33.461478 1226201 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:36:33.810488 1226201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:36:33.810511 1226201 machine.go:97] duration metric: took 4.534685014s to provisionDockerMachine
	I1108 10:36:33.810523 1226201 client.go:176] duration metric: took 7.465384358s to LocalClient.Create
	I1108 10:36:33.810537 1226201 start.go:167] duration metric: took 7.465444935s to libmachine.API.Create "no-preload-291044"
	I1108 10:36:33.810549 1226201 start.go:293] postStartSetup for "no-preload-291044" (driver="docker")
	I1108 10:36:33.810562 1226201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:36:33.810630 1226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:36:33.810675 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:33.827801 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:36:33.933052 1226201 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:36:33.936389 1226201 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:36:33.936419 1226201 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:36:33.936429 1226201 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:36:33.936516 1226201 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:36:33.936598 1226201 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:36:33.936704 1226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:36:33.944363 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:36:33.963188 1226201 start.go:296] duration metric: took 152.622906ms for postStartSetup
	I1108 10:36:33.963600 1226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:36:33.980317 1226201 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/config.json ...
	I1108 10:36:33.980649 1226201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:36:33.980703 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:33.998230 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:36:34.101642 1226201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:36:34.106493 1226201 start.go:128] duration metric: took 7.767808628s to createHost
	I1108 10:36:34.106518 1226201 start.go:83] releasing machines lock for "no-preload-291044", held for 7.767946297s
	I1108 10:36:34.106595 1226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:36:34.124085 1226201 ssh_runner.go:195] Run: cat /version.json
	I1108 10:36:34.124145 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:34.124380 1226201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:36:34.124464 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:36:34.146152 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:36:34.148293 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:36:34.252180 1226201 ssh_runner.go:195] Run: systemctl --version
	I1108 10:36:34.352014 1226201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:36:34.394062 1226201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:36:34.398799 1226201 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:36:34.398870 1226201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:36:34.430534 1226201 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:36:34.430561 1226201 start.go:496] detecting cgroup driver to use...
	I1108 10:36:34.430593 1226201 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:36:34.430662 1226201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:36:34.449455 1226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:36:34.462319 1226201 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:36:34.462382 1226201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:36:34.480246 1226201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:36:34.499094 1226201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:36:34.630200 1226201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:36:34.759993 1226201 docker.go:234] disabling docker service ...
	I1108 10:36:34.760113 1226201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:36:34.784931 1226201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:36:34.799843 1226201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:36:34.938905 1226201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:36:35.076128 1226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:36:35.089937 1226201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:36:35.107302 1226201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:36:35.107372 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.116543 1226201 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:36:35.116629 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.125982 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.134833 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.144005 1226201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:36:35.152281 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.160757 1226201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.174964 1226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:36:35.184740 1226201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:36:35.192280 1226201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:36:35.199747 1226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:36:35.315693 1226201 ssh_runner.go:195] Run: sudo systemctl restart crio
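Condensed, the CRI-O preparation above is a handful of in-place edits to the 02-crio.conf drop-in followed by a restart. A sketch of the same sequence, with paths, keys and values copied from the commands in this log (the unprivileged-port sysctl rewrite is left out for brevity; this is not the actual minikube source):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"   # conmon follows the pod cgroup
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'                  # pod traffic needs forwarding enabled
	sudo systemctl daemon-reload && sudo systemctl restart crio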
	I1108 10:36:35.453083 1226201 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:36:35.453161 1226201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:36:35.457394 1226201 start.go:564] Will wait 60s for crictl version
	I1108 10:36:35.457458 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:35.461178 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:36:35.490376 1226201 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:36:35.490481 1226201 ssh_runner.go:195] Run: crio --version
	I1108 10:36:35.520793 1226201 ssh_runner.go:195] Run: crio --version
	I1108 10:36:35.561796 1226201 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:36:35.564577 1226201 cli_runner.go:164] Run: docker network inspect no-preload-291044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:36:35.580749 1226201 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:36:35.587785 1226201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:36:35.597691 1226201 kubeadm.go:884] updating cluster {Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:36:35.597805 1226201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:36:35.597851 1226201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:36:35.621144 1226201 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1108 10:36:35.621170 1226201 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 10:36:35.621205 1226201 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:35.621397 1226201 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:35.621487 1226201 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:35.621567 1226201 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:35.621653 1226201 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:35.621737 1226201 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1108 10:36:35.621820 1226201 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:35.621909 1226201 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:35.623257 1226201 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:35.623526 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:35.623693 1226201 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1108 10:36:35.623845 1226201 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:35.623995 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:35.624167 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:35.624322 1226201 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:35.624690 1226201 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:35.874518 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:35.897516 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:35.900589 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1108 10:36:35.916481 1226201 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1108 10:36:35.916588 1226201 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:35.916668 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:35.921430 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:35.937577 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	W1108 10:36:34.853457 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:36.855507 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:35.951342 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:35.953422 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:35.977996 1226201 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1108 10:36:35.978048 1226201 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:35.978105 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.034730 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:36.036274 1226201 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1108 10:36:36.036363 1226201 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1108 10:36:36.036474 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.058537 1226201 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1108 10:36:36.058580 1226201 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:36.058632 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.058689 1226201 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1108 10:36:36.058706 1226201 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:36.058746 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.058820 1226201 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1108 10:36:36.058837 1226201 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:36.058857 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.067380 1226201 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1108 10:36:36.067425 1226201 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:36.067486 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:36.067607 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:36.095615 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:36.095735 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 10:36:36.095828 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:36.095881 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:36.095939 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:36.115765 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:36.115933 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:36.212090 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 10:36:36.212235 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 10:36:36.212358 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:36.212512 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:36.212639 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:36.235124 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1108 10:36:36.235279 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:36.313417 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 10:36:36.313587 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 10:36:36.313704 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1108 10:36:36.313819 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1108 10:36:36.313944 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 10:36:36.314060 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 10:36:36.342212 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1108 10:36:36.342341 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 10:36:36.342432 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 10:36:36.418494 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1108 10:36:36.418605 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1108 10:36:36.418673 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1108 10:36:36.418721 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 10:36:36.418764 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1108 10:36:36.418827 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1108 10:36:36.418872 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1108 10:36:36.418891 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1108 10:36:36.418931 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1108 10:36:36.418995 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 10:36:36.419043 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1108 10:36:36.419086 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 10:36:36.419128 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1108 10:36:36.419145 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1108 10:36:36.454407 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1108 10:36:36.454645 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1108 10:36:36.454671 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1108 10:36:36.454792 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1108 10:36:36.454810 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1108 10:36:36.454879 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1108 10:36:36.454908 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1108 10:36:36.454987 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1108 10:36:36.455004 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1108 10:36:36.454452 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1108 10:36:36.621252 1226201 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1108 10:36:36.621497 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1108 10:36:37.038118 1226201 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1108 10:36:37.038434 1226201 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:37.079792 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1108 10:36:37.079830 1226201 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 10:36:37.079906 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 10:36:37.144752 1226201 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1108 10:36:37.144869 1226201 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:37.144950 1226201 ssh_runner.go:195] Run: which crictl
	I1108 10:36:38.801005 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.721056189s)
	I1108 10:36:38.801036 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1108 10:36:38.801054 1226201 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1108 10:36:38.801079 1226201 ssh_runner.go:235] Completed: which crictl: (1.65605994s)
	I1108 10:36:38.801102 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1108 10:36:38.801170 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:40.614613 1226201 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.813402835s)
	I1108 10:36:40.614690 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:36:40.614878 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.813747252s)
	I1108 10:36:40.614895 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1108 10:36:40.614920 1226201 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 10:36:40.614964 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 10:36:40.641598 1226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1108 10:36:39.355099 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:41.356007 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:41.980486 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.365460223s)
	I1108 10:36:41.980512 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1108 10:36:41.980531 1226201 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 10:36:41.980577 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 10:36:41.980656 1226201 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.338996867s)
	I1108 10:36:41.980685 1226201 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1108 10:36:41.980746 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1108 10:36:43.291943 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.311336485s)
	I1108 10:36:43.291971 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1108 10:36:43.291990 1226201 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 10:36:43.292038 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 10:36:43.292115 1226201 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.311351113s)
	I1108 10:36:43.292136 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1108 10:36:43.292152 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1108 10:36:44.651187 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.359125215s)
	I1108 10:36:44.651213 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1108 10:36:44.651261 1226201 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1108 10:36:44.651312 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1108 10:36:43.854699 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	W1108 10:36:46.354318 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:48.421823 1226201 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.770481142s)
	I1108 10:36:48.421849 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1108 10:36:48.421867 1226201 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1108 10:36:48.421913 1226201 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1108 10:36:48.981158 1226201 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1108 10:36:48.981198 1226201 cache_images.go:125] Successfully loaded all cached images
	I1108 10:36:48.981204 1226201 cache_images.go:94] duration metric: took 13.360021521s to LoadCachedImages
	I1108 10:36:48.981215 1226201 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 10:36:48.981326 1226201 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-291044 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:36:48.981408 1226201 ssh_runner.go:195] Run: crio config
	I1108 10:36:49.058278 1226201 cni.go:84] Creating CNI manager for ""
	I1108 10:36:49.058351 1226201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:36:49.058385 1226201 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:36:49.058438 1226201 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-291044 NodeName:no-preload-291044 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:36:49.058615 1226201 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-291044"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:36:49.058739 1226201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:36:49.067397 1226201 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1108 10:36:49.067501 1226201 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1108 10:36:49.075657 1226201 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1108 10:36:49.075917 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1108 10:36:49.076538 1226201 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1108 10:36:49.076686 1226201 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1108 10:36:49.080787 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1108 10:36:49.080826 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1108 10:36:49.780901 1226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:36:49.801064 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1108 10:36:49.806025 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1108 10:36:49.806117 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1108 10:36:49.895424 1226201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1108 10:36:49.904976 1226201 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1108 10:36:49.905067 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1108 10:36:50.469814 1226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:36:50.478356 1226201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 10:36:50.492169 1226201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:36:50.506985 1226201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1108 10:36:50.521491 1226201 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:36:50.525365 1226201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:36:50.535827 1226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:36:50.653600 1226201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:36:50.689572 1226201 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044 for IP: 192.168.85.2
	I1108 10:36:50.689591 1226201 certs.go:195] generating shared ca certs ...
	I1108 10:36:50.689607 1226201 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:50.689741 1226201 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:36:50.689783 1226201 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:36:50.689790 1226201 certs.go:257] generating profile certs ...
	I1108 10:36:50.689845 1226201 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.key
	I1108 10:36:50.689855 1226201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt with IP's: []
	W1108 10:36:48.853130 1222758 pod_ready.go:104] pod "coredns-66bc5c9577-74xnp" is not "Ready", error: <nil>
	I1108 10:36:51.353398 1222758 pod_ready.go:94] pod "coredns-66bc5c9577-74xnp" is "Ready"
	I1108 10:36:51.353420 1222758 pod_ready.go:86] duration metric: took 38.50605044s for pod "coredns-66bc5c9577-74xnp" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.356557 1222758 pod_ready.go:83] waiting for pod "etcd-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.361486 1222758 pod_ready.go:94] pod "etcd-embed-certs-790346" is "Ready"
	I1108 10:36:51.361507 1222758 pod_ready.go:86] duration metric: took 4.928624ms for pod "etcd-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.364937 1222758 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.375138 1222758 pod_ready.go:94] pod "kube-apiserver-embed-certs-790346" is "Ready"
	I1108 10:36:51.375160 1222758 pod_ready.go:86] duration metric: took 10.203511ms for pod "kube-apiserver-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.382259 1222758 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.551480 1222758 pod_ready.go:94] pod "kube-controller-manager-embed-certs-790346" is "Ready"
	I1108 10:36:51.551557 1222758 pod_ready.go:86] duration metric: took 169.224325ms for pod "kube-controller-manager-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:51.751419 1222758 pod_ready.go:83] waiting for pod "kube-proxy-fx79j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:52.151248 1222758 pod_ready.go:94] pod "kube-proxy-fx79j" is "Ready"
	I1108 10:36:52.151329 1222758 pod_ready.go:86] duration metric: took 399.838209ms for pod "kube-proxy-fx79j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:52.351673 1222758 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:52.751743 1222758 pod_ready.go:94] pod "kube-scheduler-embed-certs-790346" is "Ready"
	I1108 10:36:52.751775 1222758 pod_ready.go:86] duration metric: took 400.020833ms for pod "kube-scheduler-embed-certs-790346" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:36:52.751790 1222758 pod_ready.go:40] duration metric: took 39.911499587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:36:52.818557 1222758 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:36:52.822152 1222758 out.go:179] * Done! kubectl is now configured to use "embed-certs-790346" cluster and "default" namespace by default
	I1108 10:36:51.229616 1226201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt ...
	I1108 10:36:51.229651 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: {Name:mk305b9b1018de0d9ca1d9dedc09b3076d8b16c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:51.229896 1226201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.key ...
	I1108 10:36:51.229914 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.key: {Name:mkac67e6f5d15d6f1b52a8c327f13a3c452117f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:51.230054 1226201 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key.e7c39ab7
	I1108 10:36:51.230076 1226201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt.e7c39ab7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1108 10:36:51.425151 1226201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt.e7c39ab7 ...
	I1108 10:36:51.425185 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt.e7c39ab7: {Name:mk99ba7afd201b9721628697cdfbc9c598ef2418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:51.425404 1226201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key.e7c39ab7 ...
	I1108 10:36:51.425421 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key.e7c39ab7: {Name:mk899e4719f05dba7ca71c6be10914edd026ecfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:51.425552 1226201 certs.go:382] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt.e7c39ab7 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt
	I1108 10:36:51.425675 1226201 certs.go:386] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key.e7c39ab7 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key
	I1108 10:36:51.425763 1226201 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key
	I1108 10:36:51.425809 1226201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.crt with IP's: []
	I1108 10:36:52.134144 1226201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.crt ...
	I1108 10:36:52.134178 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.crt: {Name:mk34f6f4c9db2b8e21c33b179b75d255b04c174d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:52.134427 1226201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key ...
	I1108 10:36:52.134446 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key: {Name:mkf47b2faa847274fb1ed09f673cf2f01ba6b83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:36:52.134694 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:36:52.134758 1226201 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:36:52.134775 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:36:52.134811 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:36:52.134858 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:36:52.134888 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:36:52.134952 1226201 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:36:52.135589 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:36:52.154478 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:36:52.173990 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:36:52.192997 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:36:52.210025 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 10:36:52.228648 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:36:52.246019 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:36:52.264059 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:36:52.282748 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:36:52.299980 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:36:52.318905 1226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:36:52.337601 1226201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:36:52.352567 1226201 ssh_runner.go:195] Run: openssl version
	I1108 10:36:52.362052 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:36:52.372576 1226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:36:52.377218 1226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:36:52.377310 1226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:36:52.420953 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:36:52.429452 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:36:52.438110 1226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:52.441920 1226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:52.441998 1226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:36:52.487792 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:36:52.496278 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:36:52.504410 1226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:36:52.508361 1226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:36:52.508464 1226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:36:52.554565 1226201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:36:52.563257 1226201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:36:52.566928 1226201 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:36:52.567022 1226201 kubeadm.go:401] StartCluster: {Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:36:52.567116 1226201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:36:52.567187 1226201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:36:52.593774 1226201 cri.go:89] found id: ""
	I1108 10:36:52.593870 1226201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:36:52.602428 1226201 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:36:52.610325 1226201 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:36:52.610432 1226201 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:36:52.618396 1226201 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:36:52.618425 1226201 kubeadm.go:158] found existing configuration files:
	
	I1108 10:36:52.618516 1226201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:36:52.626228 1226201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:36:52.626326 1226201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:36:52.633832 1226201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:36:52.641477 1226201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:36:52.641557 1226201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:36:52.649105 1226201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:36:52.656945 1226201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:36:52.657010 1226201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:36:52.664497 1226201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:36:52.672038 1226201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:36:52.672101 1226201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:36:52.680327 1226201 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:36:52.747526 1226201 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:36:52.747841 1226201 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:36:52.844865 1226201 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Nov 08 10:36:42 embed-certs-790346 crio[650]: time="2025-11-08T10:36:42.968747653Z" level=info msg="Started container" PID=1633 containerID=dd8fae91fcb9db2903395cab846cc54cb65d2787b94c2a4c392f8fced9aedf0d description=kube-system/storage-provisioner/storage-provisioner id=fac4a862-89bc-4dbf-8045-91b7a43efd99 name=/runtime.v1.RuntimeService/StartContainer sandboxID=603c0d7a0e3c07dabb0080b0ecd24d42246742f781d98b2aa1a3876672c9aefc
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.668316182Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=59a1c58c-6bf8-45ea-80d0-500c1410bbe4 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.669811744Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ea5fcaf7-6cb0-4b05-a97e-add49c61ffde name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.6707913Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6/dashboard-metrics-scraper" id=344a787a-6af2-4c32-90cf-20cd47efb6f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.670881521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.681716815Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.682320564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.715631502Z" level=info msg="Created container 26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6/dashboard-metrics-scraper" id=344a787a-6af2-4c32-90cf-20cd47efb6f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.716829415Z" level=info msg="Starting container: 26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512" id=d7380c42-9fe0-428d-aeac-45f96b2ba79b name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.724642088Z" level=info msg="Started container" PID=1646 containerID=26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6/dashboard-metrics-scraper id=d7380c42-9fe0-428d-aeac-45f96b2ba79b name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac70989979f8b61f80ef5a076ea7c4ae76e220ebcd1305e6fa97fadf8b241f22
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.95651871Z" level=info msg="Removing container: 1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927" id=a22ede32-794d-428b-bf9f-424160938bcd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.967102312Z" level=info msg="Error loading conmon cgroup of container 1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927: cgroup deleted" id=a22ede32-794d-428b-bf9f-424160938bcd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:36:50 embed-certs-790346 crio[650]: time="2025-11-08T10:36:50.973339245Z" level=info msg="Removed container 1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6/dashboard-metrics-scraper" id=a22ede32-794d-428b-bf9f-424160938bcd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.531236366Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.537467483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.537629693Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.537713728Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.544838101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.544996374Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.545067206Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.551369402Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.551530176Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.551608902Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.555021584Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:36:52 embed-certs-790346 crio[650]: time="2025-11-08T10:36:52.555211978Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	26ff283aa3976       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   ac70989979f8b       dashboard-metrics-scraper-6ffb444bf9-2bfj6   kubernetes-dashboard
	dd8fae91fcb9d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           30 seconds ago       Running             storage-provisioner         2                   603c0d7a0e3c0       storage-provisioner                          kube-system
	18faf513aa0bb       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   50 seconds ago       Running             kubernetes-dashboard        0                   d78cdaf352819       kubernetes-dashboard-855c9754f9-xxk4p        kubernetes-dashboard
	1c61611abd023       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   97d25fec96dbe       coredns-66bc5c9577-74xnp                     kube-system
	99642b383fc0d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   603c0d7a0e3c0       storage-provisioner                          kube-system
	b25262d04ac63       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   dd03c8406009d       kindnet-8978r                                kube-system
	b6fe2b3b81b15       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   224d997255f90       busybox                                      default
	bb49cec67a688       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   2b9bf043d820b       kube-proxy-fx79j                             kube-system
	e9b1d9f7c0483       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9de3151dfd702       kube-controller-manager-embed-certs-790346   kube-system
	86097d71b8a6e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   ecca18b99343a       kube-apiserver-embed-certs-790346            kube-system
	ea89ad8d0eb68       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   36e95dd28b988       etcd-embed-certs-790346                      kube-system
	2edd058c6ccdb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   98a810fab7b5c       kube-scheduler-embed-certs-790346            kube-system
	
	
	==> coredns [1c61611abd0236cf3edd7cd7cbcb13d39c0e46247750268e93e4590ecd739144] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48128 - 52608 "HINFO IN 4205675465250288656.7918789902488336052. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012861907s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-790346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-790346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=embed-certs-790346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_34_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:34:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-790346
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:37:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:36:42 +0000   Sat, 08 Nov 2025 10:34:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:36:42 +0000   Sat, 08 Nov 2025 10:34:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:36:42 +0000   Sat, 08 Nov 2025 10:34:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:36:42 +0000   Sat, 08 Nov 2025 10:35:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-790346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                eee914a9-8e5e-440d-b038-b0a41c7677a4
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-74xnp                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 etcd-embed-certs-790346                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-8978r                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-embed-certs-790346             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-embed-certs-790346    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-fx79j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-embed-certs-790346             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2bfj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xxk4p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m23s              kube-proxy       
	  Normal   Starting                 60s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m29s              kubelet          Node embed-certs-790346 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m29s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m29s              kubelet          Node embed-certs-790346 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m29s              kubelet          Node embed-certs-790346 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m29s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m25s              node-controller  Node embed-certs-790346 event: Registered Node embed-certs-790346 in Controller
	  Normal   NodeReady                102s               kubelet          Node embed-certs-790346 status is now: NodeReady
	  Normal   Starting                 68s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node embed-certs-790346 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node embed-certs-790346 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)  kubelet          Node embed-certs-790346 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node embed-certs-790346 event: Registered Node embed-certs-790346 in Controller
	
	
	==> dmesg <==
	[ +18.424643] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:36] overlayfs: idmapped layers are currently not supported
	[ +30.788294] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ea89ad8d0eb688f083aeb7d472a94d7a3f3b2063341d0ca898c464ca703d3501] <==
	{"level":"warn","ts":"2025-11-08T10:36:09.085429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.116570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.178170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.202827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.262985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.301500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.339222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.376951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.396753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.428908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.463850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.481640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.515674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.536801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.572477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.644538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.675724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.724490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.747775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.780031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.839777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.884162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:09.940758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:10.022993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:36:10.190494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48306","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:13 up  9:19,  0 user,  load average: 5.43, 4.13, 3.22
	Linux embed-certs-790346 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b25262d04ac63153c5449a4717cac831ae0adffd457e2b4ff0b7e0902f0792e0] <==
	I1108 10:36:12.337155       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:36:12.337581       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:36:12.340725       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:36:12.340750       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:36:12.340765       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:36:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:36:12.533055       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:36:12.533131       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:36:12.533165       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:36:12.534157       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:36:42.531771       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:36:42.534210       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:36:42.534373       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:36:42.534502       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:36:44.233309       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:36:44.233337       1 metrics.go:72] Registering metrics
	I1108 10:36:44.233403       1 controller.go:711] "Syncing nftables rules"
	I1108 10:36:52.530940       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:36:52.531003       1 main.go:301] handling current node
	I1108 10:37:02.536563       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:37:02.536692       1 main.go:301] handling current node
	I1108 10:37:12.540710       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 10:37:12.540737       1 main.go:301] handling current node
	
	
	==> kube-apiserver [86097d71b8a6e43eb320fe2cd739591210e92690c38263951b284aa8c7ee0039] <==
	I1108 10:36:11.446842       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:36:11.446858       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:36:11.446961       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:36:11.447039       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:36:11.447079       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:36:11.447297       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:36:11.447391       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 10:36:11.448509       1 aggregator.go:171] initial CRD sync complete...
	I1108 10:36:11.448524       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:36:11.448529       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:36:11.448535       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:36:11.465680       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:36:11.477905       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1108 10:36:11.490815       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:36:11.733207       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:36:11.949535       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:36:12.403163       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:36:12.492386       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:36:12.555619       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:36:12.574637       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:36:12.653860       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.68.113"}
	I1108 10:36:12.673277       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.107.185"}
	I1108 10:36:14.614227       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:36:14.982418       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:36:15.084538       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e9b1d9f7c0483027ded3f21b252ced6d355c5f322e03b235441f038ad56cee88] <==
	I1108 10:36:14.613337       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:36:14.613344       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:36:14.616127       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:36:14.616227       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:36:14.616309       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-790346"
	I1108 10:36:14.616357       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 10:36:14.623378       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:36:14.624337       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:36:14.624362       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:36:14.628778       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:36:14.628872       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 10:36:14.629240       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:36:14.629451       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:36:14.629533       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:36:14.629663       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:36:14.630016       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 10:36:14.631144       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 10:36:14.631215       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:36:14.632597       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 10:36:14.635903       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 10:36:14.642803       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 10:36:14.646308       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:36:14.653074       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:36:14.655310       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:36:14.662580       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bb49cec67a688ecce92db1dcf1da23dc04e0dab933a76690980178360b633df1] <==
	I1108 10:36:12.486947       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:36:12.681481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:36:12.782410       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:36:12.782452       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:36:12.782545       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:36:12.870483       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:36:12.870601       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:36:12.893420       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:36:12.894307       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:36:12.894578       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:36:12.896667       1 config.go:200] "Starting service config controller"
	I1108 10:36:12.896748       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:36:12.896800       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:36:12.896872       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:36:12.896938       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:36:12.896983       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:36:12.897676       1 config.go:309] "Starting node config controller"
	I1108 10:36:12.899183       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:36:12.899233       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:36:12.997723       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:36:12.997734       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:36:12.997752       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2edd058c6ccdbae4d8675a306904465a1fe93113e0e01793a923f585b98be4d2] <==
	I1108 10:36:09.398554       1 serving.go:386] Generated self-signed cert in-memory
	W1108 10:36:11.208912       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 10:36:11.208945       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 10:36:11.208956       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 10:36:11.208963       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 10:36:11.330672       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:36:11.330776       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:36:11.337546       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:36:11.340728       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:36:11.360151       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:36:11.340837       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:36:11.461046       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:36:15 embed-certs-790346 kubelet[777]: W1108 10:36:15.577100     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/crio-d78cdaf352819c36c696ed49e8f9fe70017bf76c245c0a718d7a459f78ebd97a WatchSource:0}: Error finding container d78cdaf352819c36c696ed49e8f9fe70017bf76c245c0a718d7a459f78ebd97a: Status 404 returned error can't find the container with id d78cdaf352819c36c696ed49e8f9fe70017bf76c245c0a718d7a459f78ebd97a
	Nov 08 10:36:15 embed-certs-790346 kubelet[777]: W1108 10:36:15.596125     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/crio-ac70989979f8b61f80ef5a076ea7c4ae76e220ebcd1305e6fa97fadf8b241f22 WatchSource:0}: Error finding container ac70989979f8b61f80ef5a076ea7c4ae76e220ebcd1305e6fa97fadf8b241f22: Status 404 returned error can't find the container with id ac70989979f8b61f80ef5a076ea7c4ae76e220ebcd1305e6fa97fadf8b241f22
	Nov 08 10:36:21 embed-certs-790346 kubelet[777]: I1108 10:36:21.093345     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 10:36:23 embed-certs-790346 kubelet[777]: I1108 10:36:23.967759     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xxk4p" podStartSLOduration=1.447146415 podStartE2EDuration="8.966298524s" podCreationTimestamp="2025-11-08 10:36:15 +0000 UTC" firstStartedPulling="2025-11-08 10:36:15.583440935 +0000 UTC m=+10.139382393" lastFinishedPulling="2025-11-08 10:36:23.102593044 +0000 UTC m=+17.658534502" observedRunningTime="2025-11-08 10:36:23.894709541 +0000 UTC m=+18.450651048" watchObservedRunningTime="2025-11-08 10:36:23.966298524 +0000 UTC m=+18.522239990"
	Nov 08 10:36:30 embed-certs-790346 kubelet[777]: I1108 10:36:30.891209     777 scope.go:117] "RemoveContainer" containerID="30f5ccacfd1ec04945cb0496a2a6e8a099e5a226edae73a7e085441fd8c49e93"
	Nov 08 10:36:31 embed-certs-790346 kubelet[777]: I1108 10:36:31.897306     777 scope.go:117] "RemoveContainer" containerID="1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927"
	Nov 08 10:36:31 embed-certs-790346 kubelet[777]: E1108 10:36:31.897471     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:36:31 embed-certs-790346 kubelet[777]: I1108 10:36:31.898519     777 scope.go:117] "RemoveContainer" containerID="30f5ccacfd1ec04945cb0496a2a6e8a099e5a226edae73a7e085441fd8c49e93"
	Nov 08 10:36:32 embed-certs-790346 kubelet[777]: I1108 10:36:32.901184     777 scope.go:117] "RemoveContainer" containerID="1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927"
	Nov 08 10:36:32 embed-certs-790346 kubelet[777]: E1108 10:36:32.901354     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:36:35 embed-certs-790346 kubelet[777]: I1108 10:36:35.550512     777 scope.go:117] "RemoveContainer" containerID="1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927"
	Nov 08 10:36:35 embed-certs-790346 kubelet[777]: E1108 10:36:35.550707     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:36:42 embed-certs-790346 kubelet[777]: I1108 10:36:42.929153     777 scope.go:117] "RemoveContainer" containerID="99642b383fc0dec33ad2ce8f0c4a4ffe1b697e862a2493a131dd3ed36626da5e"
	Nov 08 10:36:50 embed-certs-790346 kubelet[777]: I1108 10:36:50.667800     777 scope.go:117] "RemoveContainer" containerID="1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927"
	Nov 08 10:36:50 embed-certs-790346 kubelet[777]: I1108 10:36:50.951417     777 scope.go:117] "RemoveContainer" containerID="1ec116f483974677ebc00d53fcb4ee58a2de4418ac4d3712613967180abac927"
	Nov 08 10:36:50 embed-certs-790346 kubelet[777]: I1108 10:36:50.951770     777 scope.go:117] "RemoveContainer" containerID="26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512"
	Nov 08 10:36:50 embed-certs-790346 kubelet[777]: E1108 10:36:50.951943     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:36:55 embed-certs-790346 kubelet[777]: I1108 10:36:55.550424     777 scope.go:117] "RemoveContainer" containerID="26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512"
	Nov 08 10:36:55 embed-certs-790346 kubelet[777]: E1108 10:36:55.551246     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:37:05 embed-certs-790346 kubelet[777]: E1108 10:37:05.924256     777 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/c42811f4804939850c11bd3397a7f367e5bbaffbc76f29388283f735d9b2a5f7/crio-36e95dd28b9882cf357f3f321d33c57a00d7b9bd20e736dd292293b2307f40a9\": RecentStats: unable to find data in memory cache]"
	Nov 08 10:37:06 embed-certs-790346 kubelet[777]: I1108 10:37:06.667024     777 scope.go:117] "RemoveContainer" containerID="26ff283aa397627cc2fcca681d61aacfcdeb5ba209f1bac105cbefddcb82a512"
	Nov 08 10:37:06 embed-certs-790346 kubelet[777]: E1108 10:37:06.667229     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2bfj6_kubernetes-dashboard(6e9d7f1b-2450-4d9d-81fa-840874a8cd20)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2bfj6" podUID="6e9d7f1b-2450-4d9d-81fa-840874a8cd20"
	Nov 08 10:37:06 embed-certs-790346 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:37:06 embed-certs-790346 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:37:06 embed-certs-790346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [18faf513aa0bb244af506a701456ddf2f5242f9fb0fceca3afc5ff31f5ff8f5e] <==
	2025/11/08 10:36:23 Using namespace: kubernetes-dashboard
	2025/11/08 10:36:23 Using in-cluster config to connect to apiserver
	2025/11/08 10:36:23 Using secret token for csrf signing
	2025/11/08 10:36:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:36:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:36:23 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:36:23 Generating JWE encryption key
	2025/11/08 10:36:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:36:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:36:24 Initializing JWE encryption key from synchronized object
	2025/11/08 10:36:24 Creating in-cluster Sidecar client
	2025/11/08 10:36:24 Serving insecurely on HTTP port: 9090
	2025/11/08 10:36:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:36:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:36:23 Starting overwatch
	
	
	==> storage-provisioner [99642b383fc0dec33ad2ce8f0c4a4ffe1b697e862a2493a131dd3ed36626da5e] <==
	I1108 10:36:12.365624       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:36:42.382436       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [dd8fae91fcb9db2903395cab846cc54cb65d2787b94c2a4c392f8fced9aedf0d] <==
	W1108 10:36:43.012904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:46.468049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:50.765793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:54.377268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:36:57.431601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:00.455428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:00.462335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:37:00.462629       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:37:00.465687       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-790346_3fef511d-0992-4c37-933f-ffd4f2753bd5!
	I1108 10:37:00.475137       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58b9af2b-5b91-43b5-9be5-4a96191976d2", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-790346_3fef511d-0992-4c37-933f-ffd4f2753bd5 became leader
	W1108 10:37:00.484725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:00.491181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:37:00.566248       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-790346_3fef511d-0992-4c37-933f-ffd4f2753bd5!
	W1108 10:37:02.494639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:02.501724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:04.505610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:04.512720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:06.525338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:06.538547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:08.542495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:08.552838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:10.564516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:10.579073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:12.584571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:12.591814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790346 -n embed-certs-790346
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790346 -n embed-certs-790346: exit status 2 (528.135769ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-790346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (9.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-291044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-291044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (307.515837ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:37:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-291044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-291044 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-291044 describe deploy/metrics-server -n kube-system: exit status 1 (82.675545ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-291044 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-291044
helpers_test.go:243: (dbg) docker inspect no-preload-291044:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a",
	        "Created": "2025-11-08T10:36:27.945864714Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1226537,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:36:28.19718477Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/hostname",
	        "HostsPath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/hosts",
	        "LogPath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a-json.log",
	        "Name": "/no-preload-291044",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-291044:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-291044",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a",
	                "LowerDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-291044",
	                "Source": "/var/lib/docker/volumes/no-preload-291044/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-291044",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-291044",
	                "name.minikube.sigs.k8s.io": "no-preload-291044",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ad2f84d8594394d339a1ff1d950cab8e81b4a790cd89cbfc3f99aec1fc2d3c4",
	            "SandboxKey": "/var/run/docker/netns/9ad2f84d8594",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34537"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34538"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34541"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34539"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34540"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-291044": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:f3:68:cf:47:3d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "15d9ca830af40cf01657fa03afa3cf3bcbb4c14b9a6b5c8dfc90bca89de4ebc4",
	                    "EndpointID": "0b23a2ab4a47aa350d54e93097d4b698dd736ed2e6c7a8754cb8434bc532959f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-291044",
	                        "4dafcc75ae9d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-291044 -n no-preload-291044
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-291044 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-291044 logs -n 25: (1.549786814s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-171136                                                                                                                                                                                                                     │ old-k8s-version-171136       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-837698                                                                                                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-236075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-236075 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-236075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-790346 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-790346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ image   │ default-k8s-diff-port-236075 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ pause   │ -p default-k8s-diff-port-236075 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-553553                                                                                                                                                                                                               │ disable-driver-mounts-553553 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:37 UTC │
	│ image   │ embed-certs-790346 image list --format=json                                                                                                                                                                                                   │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-790346 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-291044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:37:18
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:37:18.392853 1230576 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:37:18.393080 1230576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:37:18.393111 1230576 out.go:374] Setting ErrFile to fd 2...
	I1108 10:37:18.393132 1230576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:37:18.393427 1230576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:37:18.393888 1230576 out.go:368] Setting JSON to false
	I1108 10:37:18.394917 1230576 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33584,"bootTime":1762564655,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:37:18.395014 1230576 start.go:143] virtualization:  
	I1108 10:37:18.398731 1230576 out.go:179] * [newest-cni-515571] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:37:18.401965 1230576 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:37:18.402038 1230576 notify.go:221] Checking for updates...
	I1108 10:37:18.407869 1230576 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:37:18.410952 1230576 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:37:18.413924 1230576 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:37:18.416906 1230576 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:37:18.419996 1230576 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:37:16.113971 1226201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:16.613578 1226201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:17.113641 1226201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:17.612993 1226201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:18.112987 1226201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:18.569310 1226201 kubeadm.go:1114] duration metric: took 4.397893957s to wait for elevateKubeSystemPrivileges
	I1108 10:37:18.569350 1226201 kubeadm.go:403] duration metric: took 26.002325792s to StartCluster
	I1108 10:37:18.569371 1226201 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:18.569453 1226201 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:37:18.570185 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:18.570413 1226201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:37:18.570431 1226201 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:37:18.570983 1226201 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:37:18.571138 1226201 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:37:18.571231 1226201 addons.go:70] Setting storage-provisioner=true in profile "no-preload-291044"
	I1108 10:37:18.571247 1226201 addons.go:239] Setting addon storage-provisioner=true in "no-preload-291044"
	I1108 10:37:18.571269 1226201 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:37:18.571733 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:37:18.571844 1226201 addons.go:70] Setting default-storageclass=true in profile "no-preload-291044"
	I1108 10:37:18.571866 1226201 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-291044"
	I1108 10:37:18.572189 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:37:18.574561 1226201 out.go:179] * Verifying Kubernetes components...
	I1108 10:37:18.422581 1230576 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:37:18.422738 1230576 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:37:18.486460 1230576 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:37:18.486583 1230576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:37:18.627791 1230576 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:37:18.613237435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:37:18.627888 1230576 docker.go:319] overlay module found
	I1108 10:37:18.632679 1230576 out.go:179] * Using the docker driver based on user configuration
	I1108 10:37:18.584077 1226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:37:18.639950 1226201 addons.go:239] Setting addon default-storageclass=true in "no-preload-291044"
	I1108 10:37:18.639991 1226201 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:37:18.644461 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:37:18.662142 1226201 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:37:18.635524 1230576 start.go:309] selected driver: docker
	I1108 10:37:18.635550 1230576 start.go:930] validating driver "docker" against <nil>
	I1108 10:37:18.635565 1230576 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:37:18.636239 1230576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:37:18.881807 1230576 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:37:18.865224604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:37:18.882003 1230576 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1108 10:37:18.882039 1230576 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1108 10:37:18.882305 1230576 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 10:37:18.888329 1230576 out.go:179] * Using Docker driver with root privileges
	I1108 10:37:18.891174 1230576 cni.go:84] Creating CNI manager for ""
	I1108 10:37:18.891243 1230576 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:37:18.891253 1230576 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:37:18.891338 1230576 start.go:353] cluster config:
	{Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:37:18.894510 1230576 out.go:179] * Starting "newest-cni-515571" primary control-plane node in "newest-cni-515571" cluster
	I1108 10:37:18.898297 1230576 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:37:18.901205 1230576 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:37:18.904191 1230576 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:37:18.904246 1230576 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:37:18.904255 1230576 cache.go:59] Caching tarball of preloaded images
	I1108 10:37:18.904336 1230576 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:37:18.904346 1230576 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:37:18.904475 1230576 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/config.json ...
	I1108 10:37:18.904498 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/config.json: {Name:mk6f54aa92d97a630c1f7d11a4fc88c252cc90db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:18.904679 1230576 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:37:18.931027 1230576 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:37:18.931051 1230576 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:37:18.931064 1230576 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:37:18.931091 1230576 start.go:360] acquireMachinesLock for newest-cni-515571: {Name:mk1ef8d84bc10dec36e1c08ff277aaf3c1e26a13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:37:18.931191 1230576 start.go:364] duration metric: took 85.208µs to acquireMachinesLock for "newest-cni-515571"
	I1108 10:37:18.931216 1230576 start.go:93] Provisioning new machine with config: &{Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:37:18.931287 1230576 start.go:125] createHost starting for "" (driver="docker")
	I1108 10:37:18.665304 1226201 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:37:18.665326 1226201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:37:18.665395 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:37:18.729269 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:37:18.743402 1226201 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:37:18.743425 1226201 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:37:18.743503 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:37:18.842090 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:37:19.155710 1226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:37:19.287263 1226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:37:19.346896 1226201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:37:19.347009 1226201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:37:20.895063 1226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.739318845s)
	I1108 10:37:20.895110 1226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.607828935s)
	I1108 10:37:20.895400 1226201 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.54837254s)
	I1108 10:37:20.896085 1226201 node_ready.go:35] waiting up to 6m0s for node "no-preload-291044" to be "Ready" ...
	I1108 10:37:20.896303 1226201 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.549377113s)
	I1108 10:37:20.896319 1226201 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1108 10:37:20.995689 1226201 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 10:37:18.934911 1230576 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:37:18.935162 1230576 start.go:159] libmachine.API.Create for "newest-cni-515571" (driver="docker")
	I1108 10:37:18.935199 1230576 client.go:173] LocalClient.Create starting
	I1108 10:37:18.935258 1230576 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem
	I1108 10:37:18.935289 1230576 main.go:143] libmachine: Decoding PEM data...
	I1108 10:37:18.935303 1230576 main.go:143] libmachine: Parsing certificate...
	I1108 10:37:18.935358 1230576 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem
	I1108 10:37:18.935375 1230576 main.go:143] libmachine: Decoding PEM data...
	I1108 10:37:18.935385 1230576 main.go:143] libmachine: Parsing certificate...
	I1108 10:37:18.935730 1230576 cli_runner.go:164] Run: docker network inspect newest-cni-515571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:37:18.965407 1230576 cli_runner.go:211] docker network inspect newest-cni-515571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:37:18.965483 1230576 network_create.go:284] running [docker network inspect newest-cni-515571] to gather additional debugging logs...
	I1108 10:37:18.965500 1230576 cli_runner.go:164] Run: docker network inspect newest-cni-515571
	W1108 10:37:18.997652 1230576 cli_runner.go:211] docker network inspect newest-cni-515571 returned with exit code 1
	I1108 10:37:18.997678 1230576 network_create.go:287] error running [docker network inspect newest-cni-515571]: docker network inspect newest-cni-515571: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-515571 not found
	I1108 10:37:18.997692 1230576 network_create.go:289] output of [docker network inspect newest-cni-515571]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-515571 not found
	
	** /stderr **
	I1108 10:37:18.997803 1230576 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:37:19.035441 1230576 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f127b1978c3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:c7:37:65:8c:96} reservation:<nil>}
	I1108 10:37:19.035748 1230576 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b98bf73d2e94 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:99:be:46:ea:86} reservation:<nil>}
	I1108 10:37:19.036054 1230576 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c4df73992be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:ad:c1:c0:ea:6d} reservation:<nil>}
	I1108 10:37:19.036512 1230576 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a25150}
	I1108 10:37:19.036532 1230576 network_create.go:124] attempt to create docker network newest-cni-515571 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 10:37:19.036588 1230576 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-515571 newest-cni-515571
	I1108 10:37:19.138125 1230576 network_create.go:108] docker network newest-cni-515571 192.168.76.0/24 created
	I1108 10:37:19.138161 1230576 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-515571" container
	I1108 10:37:19.138241 1230576 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:37:19.172732 1230576 cli_runner.go:164] Run: docker volume create newest-cni-515571 --label name.minikube.sigs.k8s.io=newest-cni-515571 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:37:19.200520 1230576 oci.go:103] Successfully created a docker volume newest-cni-515571
	I1108 10:37:19.200595 1230576 cli_runner.go:164] Run: docker run --rm --name newest-cni-515571-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-515571 --entrypoint /usr/bin/test -v newest-cni-515571:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:37:19.952987 1230576 oci.go:107] Successfully prepared a docker volume newest-cni-515571
	I1108 10:37:19.953028 1230576 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:37:19.953047 1230576 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:37:19.953107 1230576 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-515571:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 10:37:20.998836 1226201 addons.go:515] duration metric: took 2.427679537s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 10:37:21.402506 1226201 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-291044" context rescaled to 1 replicas
	W1108 10:37:22.899541 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	W1108 10:37:25.401220 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	I1108 10:37:25.214432 1230576 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-515571:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (5.261280504s)
	I1108 10:37:25.214465 1230576 kic.go:203] duration metric: took 5.261414884s to extract preloaded images to volume ...
	W1108 10:37:25.214615 1230576 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:37:25.214721 1230576 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:37:25.267818 1230576 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-515571 --name newest-cni-515571 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-515571 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-515571 --network newest-cni-515571 --ip 192.168.76.2 --volume newest-cni-515571:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:37:25.600511 1230576 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Running}}
	I1108 10:37:25.623244 1230576 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:37:25.647547 1230576 cli_runner.go:164] Run: docker exec newest-cni-515571 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:37:25.696626 1230576 oci.go:144] the created container "newest-cni-515571" has a running status.
	I1108 10:37:25.696661 1230576 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa...
	I1108 10:37:26.214314 1230576 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:37:26.238779 1230576 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:37:26.264916 1230576 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:37:26.264936 1230576 kic_runner.go:114] Args: [docker exec --privileged newest-cni-515571 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:37:26.322400 1230576 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:37:26.343415 1230576 machine.go:94] provisionDockerMachine start ...
	I1108 10:37:26.343501 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:26.364502 1230576 main.go:143] libmachine: Using SSH client type: native
	I1108 10:37:26.364931 1230576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1108 10:37:26.364945 1230576 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:37:26.564131 1230576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-515571
	
	I1108 10:37:26.564196 1230576 ubuntu.go:182] provisioning hostname "newest-cni-515571"
	I1108 10:37:26.564292 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:26.584247 1230576 main.go:143] libmachine: Using SSH client type: native
	I1108 10:37:26.584937 1230576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1108 10:37:26.584957 1230576 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-515571 && echo "newest-cni-515571" | sudo tee /etc/hostname
	I1108 10:37:26.759236 1230576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-515571
	
	I1108 10:37:26.759391 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:26.784910 1230576 main.go:143] libmachine: Using SSH client type: native
	I1108 10:37:26.785210 1230576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1108 10:37:26.785233 1230576 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-515571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-515571/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-515571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:37:26.948474 1230576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:37:26.948503 1230576 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:37:26.948545 1230576 ubuntu.go:190] setting up certificates
	I1108 10:37:26.948561 1230576 provision.go:84] configureAuth start
	I1108 10:37:26.948646 1230576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:37:26.965740 1230576 provision.go:143] copyHostCerts
	I1108 10:37:26.965808 1230576 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:37:26.965820 1230576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:37:26.965900 1230576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:37:26.965999 1230576 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:37:26.966011 1230576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:37:26.966040 1230576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:37:26.966158 1230576 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:37:26.966175 1230576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:37:26.966206 1230576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:37:26.966265 1230576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.newest-cni-515571 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-515571]
	I1108 10:37:27.284812 1230576 provision.go:177] copyRemoteCerts
	I1108 10:37:27.284889 1230576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:37:27.284935 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:27.303896 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:27.408482 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:37:27.426288 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:37:27.445604 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:37:27.467255 1230576 provision.go:87] duration metric: took 518.654501ms to configureAuth
	I1108 10:37:27.467323 1230576 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:37:27.467535 1230576 config.go:182] Loaded profile config "newest-cni-515571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:37:27.467679 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:27.486704 1230576 main.go:143] libmachine: Using SSH client type: native
	I1108 10:37:27.487015 1230576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1108 10:37:27.487029 1230576 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:37:27.778062 1230576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:37:27.778107 1230576 machine.go:97] duration metric: took 1.434655405s to provisionDockerMachine
	I1108 10:37:27.778118 1230576 client.go:176] duration metric: took 8.842913372s to LocalClient.Create
	I1108 10:37:27.778143 1230576 start.go:167] duration metric: took 8.842983023s to libmachine.API.Create "newest-cni-515571"
	I1108 10:37:27.778155 1230576 start.go:293] postStartSetup for "newest-cni-515571" (driver="docker")
	I1108 10:37:27.778165 1230576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:37:27.778229 1230576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:37:27.778276 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:27.794708 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:27.902360 1230576 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:37:27.905819 1230576 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:37:27.905851 1230576 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:37:27.905862 1230576 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:37:27.905919 1230576 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:37:27.906002 1230576 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:37:27.906113 1230576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:37:27.913412 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:37:27.930774 1230576 start.go:296] duration metric: took 152.604091ms for postStartSetup
	I1108 10:37:27.931145 1230576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:37:27.950350 1230576 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/config.json ...
	I1108 10:37:27.950626 1230576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:37:27.950683 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:27.967816 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:28.074537 1230576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:37:28.079965 1230576 start.go:128] duration metric: took 9.148662577s to createHost
	I1108 10:37:28.079990 1230576 start.go:83] releasing machines lock for "newest-cni-515571", held for 9.148790204s
	I1108 10:37:28.080068 1230576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:37:28.099256 1230576 ssh_runner.go:195] Run: cat /version.json
	I1108 10:37:28.099284 1230576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:37:28.099310 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:28.099356 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:28.123345 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:28.128758 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:28.228094 1230576 ssh_runner.go:195] Run: systemctl --version
	I1108 10:37:28.320989 1230576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:37:28.361334 1230576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:37:28.365962 1230576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:37:28.366038 1230576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:37:28.394683 1230576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
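The find/mv step above renames any bridge or podman CNI configs so they will not conflict with the CNI minikube sets up later (kindnet, recommended a few lines below). A sketch for seeing what was disabled and restoring one file; the file name is taken from the log line above, the commands themselves are generic:

    # configs minikube parked out of the way
    ls -l /etc/cni/net.d/*.mk_disabled
    # put one back by stripping the suffix (example name from this run)
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist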
	I1108 10:37:28.394708 1230576 start.go:496] detecting cgroup driver to use...
	I1108 10:37:28.394740 1230576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:37:28.394793 1230576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:37:28.412908 1230576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:37:28.426004 1230576 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:37:28.426097 1230576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:37:28.443870 1230576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:37:28.468712 1230576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:37:28.586792 1230576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:37:28.721137 1230576 docker.go:234] disabling docker service ...
	I1108 10:37:28.721274 1230576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:37:28.743459 1230576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:37:28.756318 1230576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:37:28.872135 1230576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:37:29.002976 1230576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:37:29.018233 1230576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:37:29.035122 1230576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:37:29.035232 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.044937 1230576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:37:29.045031 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.055122 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.065483 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.075695 1230576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:37:29.084145 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.093791 1230576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.107749 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
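The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A sketch for confirming the resulting values; only the key names are grepped, since which TOML table each key sits in depends on the drop-in shipped by the base image:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",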
	I1108 10:37:29.117281 1230576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:37:29.128807 1230576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:37:29.136823 1230576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:37:29.257248 1230576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:37:29.384706 1230576 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:37:29.384809 1230576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:37:29.388862 1230576 start.go:564] Will wait 60s for crictl version
	I1108 10:37:29.388952 1230576 ssh_runner.go:195] Run: which crictl
	I1108 10:37:29.392588 1230576 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:37:29.419677 1230576 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:37:29.419790 1230576 ssh_runner.go:195] Run: crio --version
	I1108 10:37:29.452988 1230576 ssh_runner.go:195] Run: crio --version
	I1108 10:37:29.490171 1230576 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:37:29.492905 1230576 cli_runner.go:164] Run: docker network inspect newest-cni-515571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:37:29.509482 1230576 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:37:29.512967 1230576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
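The one-liner above rewrites /etc/hosts atomically: every line except an existing host.minikube.internal entry is copied to a temp file, the new mapping is appended, and the temp file replaces /etc/hosts. Verifying the result from inside the node:

    grep 'host.minikube.internal' /etc/hosts
    getent hosts host.minikube.internal   # expected to return 192.168.76.1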
	I1108 10:37:29.525857 1230576 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1108 10:37:27.899412 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	W1108 10:37:30.399371 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	I1108 10:37:29.528735 1230576 kubeadm.go:884] updating cluster {Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:37:29.528862 1230576 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:37:29.528955 1230576 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:37:29.560178 1230576 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:37:29.560198 1230576 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:37:29.560253 1230576 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:37:29.584872 1230576 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:37:29.584895 1230576 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:37:29.584903 1230576 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:37:29.585012 1230576 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-515571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:37:29.585101 1230576 ssh_runner.go:195] Run: crio config
	I1108 10:37:29.646212 1230576 cni.go:84] Creating CNI manager for ""
	I1108 10:37:29.646232 1230576 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:37:29.646253 1230576 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 10:37:29.646279 1230576 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-515571 NodeName:newest-cni-515571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:37:29.646418 1230576 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-515571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
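	This multi-document YAML is what lands in /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps further down). It can be sanity-checked before an init without touching cluster state; a sketch, assuming the kubeadm binary staged under /var/lib/minikube/binaries accepts a dry run against a config file:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run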
	
	I1108 10:37:29.646500 1230576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:37:29.654084 1230576 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:37:29.654207 1230576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:37:29.662144 1230576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 10:37:29.674858 1230576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:37:29.688308 1230576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1108 10:37:29.701709 1230576 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:37:29.705477 1230576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:37:29.714839 1230576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:37:29.828136 1230576 ssh_runner.go:195] Run: sudo systemctl start kubelet
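At this point the kubelet unit (/lib/systemd/system/kubelet.service) and its drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf, both scp'd just above) are in place and the service has been started. Inspecting the effective unit and its state:

    systemctl cat kubelet              # unit plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet
    journalctl -u kubelet --no-pager -n 20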
	I1108 10:37:29.843378 1230576 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571 for IP: 192.168.76.2
	I1108 10:37:29.843399 1230576 certs.go:195] generating shared ca certs ...
	I1108 10:37:29.843416 1230576 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:29.843562 1230576 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:37:29.843609 1230576 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:37:29.843619 1230576 certs.go:257] generating profile certs ...
	I1108 10:37:29.843674 1230576 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.key
	I1108 10:37:29.843691 1230576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.crt with IP's: []
	I1108 10:37:30.305797 1230576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.crt ...
	I1108 10:37:30.305830 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.crt: {Name:mk06ce47763c8d097a4e58e433564ff92524f3e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:30.306064 1230576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.key ...
	I1108 10:37:30.306081 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.key: {Name:mkc9ade5bf32819b647e9e1b1ffb1b7497d9c208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:30.306188 1230576 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key.0dbe4724
	I1108 10:37:30.306208 1230576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt.0dbe4724 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 10:37:30.965292 1230576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt.0dbe4724 ...
	I1108 10:37:30.965332 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt.0dbe4724: {Name:mkec282190793062f5c7282b363c5e9e32bdda76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:30.965551 1230576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key.0dbe4724 ...
	I1108 10:37:30.965577 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key.0dbe4724: {Name:mkd31fec51db04bc2294bf2ccfc9b9fab07e2fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:30.965662 1230576 certs.go:382] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt.0dbe4724 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt
	I1108 10:37:30.965751 1230576 certs.go:386] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key.0dbe4724 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key
	I1108 10:37:30.965820 1230576 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key
	I1108 10:37:30.965842 1230576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.crt with IP's: []
	I1108 10:37:31.308171 1230576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.crt ...
	I1108 10:37:31.308200 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.crt: {Name:mkee8e1f5b7c9323087787065c2706248c72ac63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:31.308394 1230576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key ...
	I1108 10:37:31.308410 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key: {Name:mk8e858113e3ac42afcb7ef83cff271240a882a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:31.308618 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:37:31.308668 1230576 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:37:31.308683 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:37:31.308713 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:37:31.308741 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:37:31.308767 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:37:31.308816 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:37:31.309382 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:37:31.327953 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:37:31.352239 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:37:31.372340 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:37:31.392586 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 10:37:31.414462 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:37:31.433312 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:37:31.460820 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:37:31.479248 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:37:31.503393 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:37:31.522457 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:37:31.539786 1230576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
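The apiserver certificate copied above was generated with SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2 (the IPs listed in the crypto.go line earlier). A sketch for double-checking the SANs of the copy that ended up on the node:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'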
	I1108 10:37:31.552400 1230576 ssh_runner.go:195] Run: openssl version
	I1108 10:37:31.558898 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:37:31.567140 1230576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:37:31.570658 1230576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:37:31.570736 1230576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:37:31.611617 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:37:31.619892 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:37:31.627778 1230576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:37:31.631200 1230576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:37:31.631263 1230576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:37:31.672064 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:37:31.680335 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:37:31.688413 1230576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:37:31.692219 1230576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:37:31.692312 1230576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:37:31.733018 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
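The openssl/ln pairs above follow the standard OpenSSL CA-directory convention: each PEM under /usr/share/ca-certificates gets an alias in /etc/ssl/certs named <subject-hash>.0 (b5213941, 51391683 and 3ec20f2e in this run). Reproducing the minikubeCA link by hand, using the same target the log shows:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # HASH is b5213941 here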
	I1108 10:37:31.741367 1230576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:37:31.745099 1230576 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:37:31.745170 1230576 kubeadm.go:401] StartCluster: {Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:37:31.745262 1230576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:37:31.745329 1230576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:37:31.770511 1230576 cri.go:89] found id: ""
	I1108 10:37:31.770635 1230576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:37:31.778493 1230576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:37:31.786279 1230576 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:37:31.786395 1230576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:37:31.794084 1230576 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:37:31.794105 1230576 kubeadm.go:158] found existing configuration files:
	
	I1108 10:37:31.794187 1230576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:37:31.801711 1230576 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:37:31.801793 1230576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:37:31.809094 1230576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:37:31.816645 1230576 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:37:31.816746 1230576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:37:31.824108 1230576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:37:31.831781 1230576 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:37:31.831881 1230576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:37:31.839289 1230576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:37:31.846835 1230576 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:37:31.846923 1230576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:37:31.854526 1230576 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:37:31.895498 1230576 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:37:31.895766 1230576 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:37:31.927935 1230576 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:37:31.928041 1230576 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:37:31.928106 1230576 kubeadm.go:319] OS: Linux
	I1108 10:37:31.928192 1230576 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:37:31.928295 1230576 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:37:31.928397 1230576 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:37:31.928553 1230576 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:37:31.928655 1230576 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:37:31.928749 1230576 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:37:31.928829 1230576 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:37:31.928933 1230576 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:37:31.929025 1230576 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:37:31.997449 1230576 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:37:31.997618 1230576 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:37:31.997782 1230576 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:37:32.011986 1230576 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 10:37:32.017952 1230576 out.go:252]   - Generating certificates and keys ...
	I1108 10:37:32.018077 1230576 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:37:32.018161 1230576 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:37:32.891045 1230576 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:37:33.114724 1230576 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1108 10:37:32.400366 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	W1108 10:37:34.404314 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	I1108 10:37:34.899908 1226201 node_ready.go:49] node "no-preload-291044" is "Ready"
	I1108 10:37:34.899933 1226201 node_ready.go:38] duration metric: took 14.00383252s for node "no-preload-291044" to be "Ready" ...
	I1108 10:37:34.899948 1226201 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:37:34.900005 1226201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:37:34.913763 1226201 api_server.go:72] duration metric: took 16.343302509s to wait for apiserver process to appear ...
	I1108 10:37:34.913784 1226201 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:37:34.913803 1226201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:37:34.925303 1226201 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:37:34.928599 1226201 api_server.go:141] control plane version: v1.34.1
	I1108 10:37:34.928631 1226201 api_server.go:131] duration metric: took 14.840232ms to wait for apiserver health ...
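The healthz probe above talks to the apiserver over the node IP. The same check can be run by hand; a sketch that assumes the kubectl context created by minikube is named after the profile (curl needs -k because the cluster CA is not in the host trust store, and /healthz is readable without client credentials on a default RBAC setup):

    kubectl --context no-preload-291044 get --raw /healthz
    curl -sk https://192.168.85.2:8443/healthz    # expected output: ok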
	I1108 10:37:34.928641 1226201 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:37:34.937317 1226201 system_pods.go:59] 8 kube-system pods found
	I1108 10:37:34.937397 1226201 system_pods.go:61] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:37:34.937419 1226201 system_pods.go:61] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running
	I1108 10:37:34.937460 1226201 system_pods.go:61] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:37:34.937487 1226201 system_pods.go:61] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running
	I1108 10:37:34.937513 1226201 system_pods.go:61] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running
	I1108 10:37:34.937547 1226201 system_pods.go:61] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:37:34.937570 1226201 system_pods.go:61] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running
	I1108 10:37:34.937596 1226201 system_pods.go:61] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:37:34.937631 1226201 system_pods.go:74] duration metric: took 8.98324ms to wait for pod list to return data ...
	I1108 10:37:34.937657 1226201 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:37:34.944079 1226201 default_sa.go:45] found service account: "default"
	I1108 10:37:34.944151 1226201 default_sa.go:55] duration metric: took 6.474277ms for default service account to be created ...
	I1108 10:37:34.944175 1226201 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:37:34.947653 1226201 system_pods.go:86] 8 kube-system pods found
	I1108 10:37:34.947727 1226201 system_pods.go:89] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:37:34.947748 1226201 system_pods.go:89] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running
	I1108 10:37:34.947774 1226201 system_pods.go:89] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:37:34.947807 1226201 system_pods.go:89] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running
	I1108 10:37:34.947831 1226201 system_pods.go:89] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running
	I1108 10:37:34.947852 1226201 system_pods.go:89] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:37:34.947890 1226201 system_pods.go:89] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running
	I1108 10:37:34.947916 1226201 system_pods.go:89] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:37:34.947964 1226201 retry.go:31] will retry after 241.025595ms: missing components: kube-dns
	I1108 10:37:35.193413 1226201 system_pods.go:86] 8 kube-system pods found
	I1108 10:37:35.193495 1226201 system_pods.go:89] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:37:35.193665 1226201 system_pods.go:89] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running
	I1108 10:37:35.193695 1226201 system_pods.go:89] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:37:35.193718 1226201 system_pods.go:89] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running
	I1108 10:37:35.193754 1226201 system_pods.go:89] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running
	I1108 10:37:35.193777 1226201 system_pods.go:89] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:37:35.193797 1226201 system_pods.go:89] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running
	I1108 10:37:35.193837 1226201 system_pods.go:89] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:37:35.193871 1226201 retry.go:31] will retry after 303.703093ms: missing components: kube-dns
	I1108 10:37:35.503661 1226201 system_pods.go:86] 8 kube-system pods found
	I1108 10:37:35.503806 1226201 system_pods.go:89] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:37:35.503860 1226201 system_pods.go:89] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running
	I1108 10:37:35.503899 1226201 system_pods.go:89] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:37:35.503951 1226201 system_pods.go:89] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running
	I1108 10:37:35.503977 1226201 system_pods.go:89] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running
	I1108 10:37:35.504034 1226201 system_pods.go:89] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:37:35.504058 1226201 system_pods.go:89] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running
	I1108 10:37:35.504083 1226201 system_pods.go:89] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:37:35.504151 1226201 retry.go:31] will retry after 396.709987ms: missing components: kube-dns
	I1108 10:37:35.906572 1226201 system_pods.go:86] 8 kube-system pods found
	I1108 10:37:35.906654 1226201 system_pods.go:89] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Running
	I1108 10:37:35.906676 1226201 system_pods.go:89] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running
	I1108 10:37:35.906701 1226201 system_pods.go:89] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:37:35.906736 1226201 system_pods.go:89] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running
	I1108 10:37:35.906761 1226201 system_pods.go:89] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running
	I1108 10:37:35.906783 1226201 system_pods.go:89] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:37:35.906819 1226201 system_pods.go:89] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running
	I1108 10:37:35.906843 1226201 system_pods.go:89] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Running
	I1108 10:37:35.906869 1226201 system_pods.go:126] duration metric: took 962.672956ms to wait for k8s-apps to be running ...
	I1108 10:37:35.906906 1226201 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:37:35.906999 1226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:37:35.924472 1226201 system_svc.go:56] duration metric: took 17.524427ms WaitForService to wait for kubelet
	I1108 10:37:35.924549 1226201 kubeadm.go:587] duration metric: took 17.354081514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:37:35.924581 1226201 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:37:35.928014 1226201 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:37:35.928093 1226201 node_conditions.go:123] node cpu capacity is 2
	I1108 10:37:35.928130 1226201 node_conditions.go:105] duration metric: took 3.51398ms to run NodePressure ...
	I1108 10:37:35.928176 1226201 start.go:242] waiting for startup goroutines ...
	I1108 10:37:35.928205 1226201 start.go:247] waiting for cluster config update ...
	I1108 10:37:35.928235 1226201 start.go:256] writing updated cluster config ...
	I1108 10:37:35.928595 1226201 ssh_runner.go:195] Run: rm -f paused
	I1108 10:37:35.933378 1226201 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:37:35.937156 1226201 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nvtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.943160 1226201 pod_ready.go:94] pod "coredns-66bc5c9577-nvtlg" is "Ready"
	I1108 10:37:35.943183 1226201 pod_ready.go:86] duration metric: took 5.958846ms for pod "coredns-66bc5c9577-nvtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.946277 1226201 pod_ready.go:83] waiting for pod "etcd-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.951906 1226201 pod_ready.go:94] pod "etcd-no-preload-291044" is "Ready"
	I1108 10:37:35.951928 1226201 pod_ready.go:86] duration metric: took 5.630438ms for pod "etcd-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.954737 1226201 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.960329 1226201 pod_ready.go:94] pod "kube-apiserver-no-preload-291044" is "Ready"
	I1108 10:37:35.960402 1226201 pod_ready.go:86] duration metric: took 5.599727ms for pod "kube-apiserver-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.963176 1226201 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:36.338552 1226201 pod_ready.go:94] pod "kube-controller-manager-no-preload-291044" is "Ready"
	I1108 10:37:36.338650 1226201 pod_ready.go:86] duration metric: took 375.41436ms for pod "kube-controller-manager-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:36.538024 1226201 pod_ready.go:83] waiting for pod "kube-proxy-2m8tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:36.938570 1226201 pod_ready.go:94] pod "kube-proxy-2m8tx" is "Ready"
	I1108 10:37:36.938602 1226201 pod_ready.go:86] duration metric: took 400.497964ms for pod "kube-proxy-2m8tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:37.138419 1226201 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:37.538233 1226201 pod_ready.go:94] pod "kube-scheduler-no-preload-291044" is "Ready"
	I1108 10:37:37.538266 1226201 pod_ready.go:86] duration metric: took 399.805637ms for pod "kube-scheduler-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:37.538280 1226201 pod_ready.go:40] duration metric: took 1.604837138s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
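The pod_ready loop above waits for one pod per label (CoreDNS, etcd, the three control-plane components and kube-proxy) to report Ready. Roughly the same check expressed with kubectl, assuming the labels printed in the log map directly to selectors:

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
    done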
	I1108 10:37:37.623398 1226201 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:37:37.626969 1226201 out.go:179] * Done! kubectl is now configured to use "no-preload-291044" cluster and "default" namespace by default
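With the paused marker removed (rm -f paused above) and kubectl configured, the cluster is usable despite the kubectl 1.33 / cluster 1.34 minor skew noted in the previous line; for example:

    kubectl config current-context    # no-preload-291044
    kubectl get nodes -o wide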
	I1108 10:37:34.404460 1230576 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:37:35.732035 1230576 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:37:35.971238 1230576 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:37:35.971584 1230576 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-515571] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:37:36.166246 1230576 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:37:36.166643 1230576 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-515571] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:37:36.678676 1230576 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:37:36.975730 1230576 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:37:37.502450 1230576 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:37:37.502871 1230576 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:37:37.687443 1230576 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:37:38.009115 1230576 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:37:38.114390 1230576 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:37:38.435967 1230576 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:37:38.887824 1230576 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:37:38.888801 1230576 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:37:38.893873 1230576 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:37:38.897309 1230576 out.go:252]   - Booting up control plane ...
	I1108 10:37:38.897418 1230576 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:37:38.903581 1230576 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:37:38.903664 1230576 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:37:38.916897 1230576 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:37:38.917018 1230576 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:37:38.924256 1230576 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:37:38.924668 1230576 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:37:38.924719 1230576 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:37:39.074527 1230576 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:37:39.074694 1230576 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:37:41.074861 1230576 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000848472s
	I1108 10:37:41.080836 1230576 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:37:41.080945 1230576 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 10:37:41.081045 1230576 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:37:41.081139 1230576 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 10:37:44.962892 1230576 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.882805188s
	I1108 10:37:46.280549 1230576 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.201168879s
	I1108 10:37:48.082869 1230576 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003289408s
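kubeadm's control-plane-check polls the three endpoints printed above until each reports healthy. They can also be probed manually from the node; -k is needed because the serving certs are self-signed, and the controller-manager and scheduler only listen on localhost (URLs taken verbatim from the log):

    curl -sk https://192.168.76.2:8443/livez
    curl -sk https://127.0.0.1:10257/healthz
    curl -sk https://127.0.0.1:10259/livez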
	I1108 10:37:48.111955 1230576 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:37:48.137932 1230576 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:37:48.175243 1230576 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:37:48.175773 1230576 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-515571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:37:48.197124 1230576 kubeadm.go:319] [bootstrap-token] Using token: plem44.4qp9l46repzins7g
	I1108 10:37:48.200074 1230576 out.go:252]   - Configuring RBAC rules ...
	I1108 10:37:48.200198 1230576 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:37:48.215380 1230576 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:37:48.230836 1230576 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:37:48.246209 1230576 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:37:48.251495 1230576 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:37:48.255705 1230576 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	
	
	==> CRI-O <==
	Nov 08 10:37:35 no-preload-291044 crio[837]: time="2025-11-08T10:37:35.068629644Z" level=info msg="Created container 3ce41fa47fc1c1152a370a2726f7d09d0b11d6d39d8dc91c5bbf21a9ec4e2466: kube-system/coredns-66bc5c9577-nvtlg/coredns" id=2d99d9f1-1315-4208-bab1-311b8ee57455 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:37:35 no-preload-291044 crio[837]: time="2025-11-08T10:37:35.06969266Z" level=info msg="Starting container: 3ce41fa47fc1c1152a370a2726f7d09d0b11d6d39d8dc91c5bbf21a9ec4e2466" id=682aae25-bed2-48eb-a618-c679cc0d0e77 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:37:35 no-preload-291044 crio[837]: time="2025-11-08T10:37:35.081816911Z" level=info msg="Started container" PID=2485 containerID=3ce41fa47fc1c1152a370a2726f7d09d0b11d6d39d8dc91c5bbf21a9ec4e2466 description=kube-system/coredns-66bc5c9577-nvtlg/coredns id=682aae25-bed2-48eb-a618-c679cc0d0e77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f31a2c0f45820ddf1815c32fcd7ed85b02a6452ce406f0c6fe25329d42ca6ef0
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.714846277Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8346fc6a-0128-4eff-8d58-c4e00e4ad0d0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.714914615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.729110206Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5a93912a1048c549f67240f02646159db8a7661c51e65d0eaeea8bb2eae86951 UID:19b26969-1d1f-4969-bd57-67043e5a7c30 NetNS:/var/run/netns/f69e64e9-9b83-441e-8330-9a367a61a06a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000b312d0}] Aliases:map[]}"
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.729286061Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.739842367Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5a93912a1048c549f67240f02646159db8a7661c51e65d0eaeea8bb2eae86951 UID:19b26969-1d1f-4969-bd57-67043e5a7c30 NetNS:/var/run/netns/f69e64e9-9b83-441e-8330-9a367a61a06a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000b312d0}] Aliases:map[]}"
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.740134452Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.744529407Z" level=info msg="Ran pod sandbox 5a93912a1048c549f67240f02646159db8a7661c51e65d0eaeea8bb2eae86951 with infra container: default/busybox/POD" id=8346fc6a-0128-4eff-8d58-c4e00e4ad0d0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.745819592Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cb74510d-f7fd-4aaf-82a9-5fec72d750e7 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.746013597Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cb74510d-f7fd-4aaf-82a9-5fec72d750e7 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.746124609Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cb74510d-f7fd-4aaf-82a9-5fec72d750e7 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.749479274Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2a6424f8-9885-4751-ae75-3c95f4846d62 name=/runtime.v1.ImageService/PullImage
	Nov 08 10:37:39 no-preload-291044 crio[837]: time="2025-11-08T10:37:39.753133484Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 10:37:41 no-preload-291044 crio[837]: time="2025-11-08T10:37:41.891394773Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=2a6424f8-9885-4751-ae75-3c95f4846d62 name=/runtime.v1.ImageService/PullImage
	Nov 08 10:37:41 no-preload-291044 crio[837]: time="2025-11-08T10:37:41.892541447Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cbfbb167-ad13-49e7-a52e-d78dfeaaf48e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:37:41 no-preload-291044 crio[837]: time="2025-11-08T10:37:41.897453268Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1d558ff2-0f18-4e21-bcb5-47fa98944192 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:37:41 no-preload-291044 crio[837]: time="2025-11-08T10:37:41.90303679Z" level=info msg="Creating container: default/busybox/busybox" id=b637269d-cc6a-4b91-ab53-ec72f1d10630 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:37:41 no-preload-291044 crio[837]: time="2025-11-08T10:37:41.903332306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:41 no-preload-291044 crio[837]: time="2025-11-08T10:37:41.910858261Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:41 no-preload-291044 crio[837]: time="2025-11-08T10:37:41.911568951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:41 no-preload-291044 crio[837]: time="2025-11-08T10:37:41.931264859Z" level=info msg="Created container 293a2a4987c859354f96c9f37f64b73381bb845cd75bce7aa912aaecdc0aae25: default/busybox/busybox" id=b637269d-cc6a-4b91-ab53-ec72f1d10630 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:37:41 no-preload-291044 crio[837]: time="2025-11-08T10:37:41.934736025Z" level=info msg="Starting container: 293a2a4987c859354f96c9f37f64b73381bb845cd75bce7aa912aaecdc0aae25" id=73d65711-f083-4efd-93dc-4b4a5d1f802b name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:37:41 no-preload-291044 crio[837]: time="2025-11-08T10:37:41.942454114Z" level=info msg="Started container" PID=2544 containerID=293a2a4987c859354f96c9f37f64b73381bb845cd75bce7aa912aaecdc0aae25 description=default/busybox/busybox id=73d65711-f083-4efd-93dc-4b4a5d1f802b name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a93912a1048c549f67240f02646159db8a7661c51e65d0eaeea8bb2eae86951
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	293a2a4987c85       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   5a93912a1048c       busybox                                     default
	3ce41fa47fc1c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   f31a2c0f45820       coredns-66bc5c9577-nvtlg                    kube-system
	542087d579b73       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   a3e4decca6d26       storage-provisioner                         kube-system
	9ec2b4ef15af9       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   2265ebf26713d       kindnet-nct2b                               kube-system
	e4e56b370645b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      31 seconds ago      Running             kube-proxy                0                   a1b1952c8a1b0       kube-proxy-2m8tx                            kube-system
	1eb6a6b2da5a5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      45 seconds ago      Running             kube-scheduler            0                   23adeb820fcfd       kube-scheduler-no-preload-291044            kube-system
	01373a0149770       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      45 seconds ago      Running             kube-controller-manager   0                   84ba3d6a9523e       kube-controller-manager-no-preload-291044   kube-system
	6fb5dedc48b12       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      45 seconds ago      Running             kube-apiserver            0                   3334dcf9897e7       kube-apiserver-no-preload-291044            kube-system
	880af6de3e046       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      45 seconds ago      Running             etcd                      0                   4b51475673c3e       etcd-no-preload-291044                      kube-system
	
	
	==> coredns [3ce41fa47fc1c1152a370a2726f7d09d0b11d6d39d8dc91c5bbf21a9ec4e2466] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47883 - 22088 "HINFO IN 1607003836193005186.4501193212942449906. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021209405s
	
	
	==> describe nodes <==
	Name:               no-preload-291044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-291044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=no-preload-291044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_37_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-291044
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:37:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:37:44 +0000   Sat, 08 Nov 2025 10:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:37:44 +0000   Sat, 08 Nov 2025 10:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:37:44 +0000   Sat, 08 Nov 2025 10:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:37:44 +0000   Sat, 08 Nov 2025 10:37:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-291044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                53ced70c-1627-4fc9-9eaa-b752fd9e6d98
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-nvtlg                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-no-preload-291044                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-nct2b                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-no-preload-291044             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-no-preload-291044    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-2m8tx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-no-preload-291044             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node no-preload-291044 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node no-preload-291044 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node no-preload-291044 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node no-preload-291044 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node no-preload-291044 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s                kubelet          Node no-preload-291044 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-291044 event: Registered Node no-preload-291044 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-291044 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:36] overlayfs: idmapped layers are currently not supported
	[ +30.788294] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [880af6de3e04685a2e9a389e6ef51e2cd377cf849a903becff34e7812db32d67] <==
	{"level":"warn","ts":"2025-11-08T10:37:07.677387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.714555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.761204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.784242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.804370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.819466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.841882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.861765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.882859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.908326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.924305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.960797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:07.973877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.001910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.030692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.044792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.098913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.139546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.254724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.329693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.347477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.403264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.419897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.458006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:08.608738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60594","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:50 up  9:20,  0 user,  load average: 3.94, 3.90, 3.18
	Linux no-preload-291044 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9ec2b4ef15af92eb5aa453afabd61c9bb65d66331bdb6d3baa072bd2fb1041cb] <==
	I1108 10:37:23.916126       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:37:23.916333       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:37:23.916492       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:37:23.916511       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:37:23.916525       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:37:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:37:24.225536       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:37:24.308526       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:37:24.308635       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:37:24.310294       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 10:37:24.609076       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:37:24.609105       1 metrics.go:72] Registering metrics
	I1108 10:37:24.609210       1 controller.go:711] "Syncing nftables rules"
	I1108 10:37:34.225779       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:37:34.225845       1 main.go:301] handling current node
	I1108 10:37:44.225264       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:37:44.225332       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6fb5dedc48b129cd52484839e0f68fbf4f5d981daad6801f63ba49418980ddd5] <==
	I1108 10:37:10.493266       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 10:37:10.493297       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:37:10.493342       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:37:10.562083       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:37:10.571748       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 10:37:10.603231       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:37:10.603473       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:37:10.957451       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 10:37:10.977496       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 10:37:10.978719       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:37:12.056725       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:37:12.143762       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:37:12.274858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:37:12.297741       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 10:37:12.335258       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1108 10:37:12.336508       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:37:12.353572       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:37:13.092329       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:37:13.117912       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 10:37:13.165619       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 10:37:17.561191       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1108 10:37:18.322103       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:37:18.342831       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:37:18.378041       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1108 10:37:48.029261       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:53384: use of closed network connection
	
	
	==> kube-controller-manager [01373a0149770b097915837cac221de8773a2ecbec10c03367cf412bd0292ff2] <==
	I1108 10:37:17.354680       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:37:17.354805       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 10:37:17.355006       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 10:37:17.355026       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:37:17.356337       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 10:37:17.356748       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:37:17.357997       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:37:17.358204       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:37:17.363824       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:37:17.363916       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:37:17.363967       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:37:17.365208       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-291044" podCIDRs=["10.244.0.0/24"]
	I1108 10:37:17.371441       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:37:17.372561       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:37:17.391452       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:37:17.392649       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:37:17.395830       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 10:37:17.402659       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:37:17.402756       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:37:17.403026       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-291044"
	I1108 10:37:17.403071       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 10:37:17.460911       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:37:17.460936       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:37:17.460944       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:37:37.405712       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e4e56b370645ba5f433ff7feac116cb35ba52ef6a92d1441d04124e9629e3998] <==
	I1108 10:37:18.760098       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:37:18.846381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:37:18.947359       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:37:18.947394       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:37:18.947497       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:37:19.029102       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:37:19.029152       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:37:19.046632       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:37:19.046925       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:37:19.046941       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:37:19.052555       1 config.go:200] "Starting service config controller"
	I1108 10:37:19.052578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:37:19.052598       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:37:19.052602       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:37:19.052610       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:37:19.052614       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:37:19.053250       1 config.go:309] "Starting node config controller"
	I1108 10:37:19.053269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:37:19.053275       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:37:19.153443       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:37:19.153485       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:37:19.153498       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1eb6a6b2da5a51f5f2b89dd22fe89ca5eb0ad4dc452b7c7506626f8a8fc9aa8e] <==
	E1108 10:37:10.655538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:37:10.655647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:37:10.655704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:37:10.655766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:37:10.655845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:37:10.655892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:37:10.655969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:37:10.656023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:37:10.656076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 10:37:10.663618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:37:10.671518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:37:10.671599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:37:10.671689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:37:10.671739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:37:10.671879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:37:10.671926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:37:10.681838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 10:37:11.478280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:37:11.588726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:37:11.615223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:37:11.680042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:37:11.697438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:37:11.702564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:37:12.117274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1108 10:37:14.825110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:37:17 no-preload-291044 kubelet[2008]: I1108 10:37:17.712736    2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef25d22a-5d36-45dd-b9c5-2a78edcf33ef-xtables-lock\") pod \"kube-proxy-2m8tx\" (UID: \"ef25d22a-5d36-45dd-b9c5-2a78edcf33ef\") " pod="kube-system/kube-proxy-2m8tx"
	Nov 08 10:37:17 no-preload-291044 kubelet[2008]: I1108 10:37:17.712753    2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef25d22a-5d36-45dd-b9c5-2a78edcf33ef-lib-modules\") pod \"kube-proxy-2m8tx\" (UID: \"ef25d22a-5d36-45dd-b9c5-2a78edcf33ef\") " pod="kube-system/kube-proxy-2m8tx"
	Nov 08 10:37:17 no-preload-291044 kubelet[2008]: I1108 10:37:17.712775    2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef25d22a-5d36-45dd-b9c5-2a78edcf33ef-kube-proxy\") pod \"kube-proxy-2m8tx\" (UID: \"ef25d22a-5d36-45dd-b9c5-2a78edcf33ef\") " pod="kube-system/kube-proxy-2m8tx"
	Nov 08 10:37:17 no-preload-291044 kubelet[2008]: I1108 10:37:17.712793    2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0bc61516-3295-45ae-8385-f44884db443d-cni-cfg\") pod \"kindnet-nct2b\" (UID: \"0bc61516-3295-45ae-8385-f44884db443d\") " pod="kube-system/kindnet-nct2b"
	Nov 08 10:37:17 no-preload-291044 kubelet[2008]: I1108 10:37:17.712815    2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdnpm\" (UniqueName: \"kubernetes.io/projected/ef25d22a-5d36-45dd-b9c5-2a78edcf33ef-kube-api-access-bdnpm\") pod \"kube-proxy-2m8tx\" (UID: \"ef25d22a-5d36-45dd-b9c5-2a78edcf33ef\") " pod="kube-system/kube-proxy-2m8tx"
	Nov 08 10:37:17 no-preload-291044 kubelet[2008]: I1108 10:37:17.843355    2008 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:37:18 no-preload-291044 kubelet[2008]: W1108 10:37:18.020727    2008 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/crio-a1b1952c8a1b0bc177bb5933122c4b8654210a1001d04edf9a6c98488b7d9ba5 WatchSource:0}: Error finding container a1b1952c8a1b0bc177bb5933122c4b8654210a1001d04edf9a6c98488b7d9ba5: Status 404 returned error can't find the container with id a1b1952c8a1b0bc177bb5933122c4b8654210a1001d04edf9a6c98488b7d9ba5
	Nov 08 10:37:18 no-preload-291044 kubelet[2008]: W1108 10:37:18.037107    2008 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/crio-2265ebf26713db29d06e556815519cb4c4957e3d22e5218f55083f5b0d5dc12c WatchSource:0}: Error finding container 2265ebf26713db29d06e556815519cb4c4957e3d22e5218f55083f5b0d5dc12c: Status 404 returned error can't find the container with id 2265ebf26713db29d06e556815519cb4c4957e3d22e5218f55083f5b0d5dc12c
	Nov 08 10:37:18 no-preload-291044 kubelet[2008]: I1108 10:37:18.645639    2008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2m8tx" podStartSLOduration=1.6456201099999999 podStartE2EDuration="1.64562011s" podCreationTimestamp="2025-11-08 10:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:37:18.532176367 +0000 UTC m=+5.578605614" watchObservedRunningTime="2025-11-08 10:37:18.64562011 +0000 UTC m=+5.692049324"
	Nov 08 10:37:34 no-preload-291044 kubelet[2008]: I1108 10:37:34.515451    2008 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 10:37:34 no-preload-291044 kubelet[2008]: I1108 10:37:34.569112    2008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nct2b" podStartSLOduration=11.842537375 podStartE2EDuration="17.569095809s" podCreationTimestamp="2025-11-08 10:37:17 +0000 UTC" firstStartedPulling="2025-11-08 10:37:18.073385814 +0000 UTC m=+5.119815029" lastFinishedPulling="2025-11-08 10:37:23.799944248 +0000 UTC m=+10.846373463" observedRunningTime="2025-11-08 10:37:24.534651209 +0000 UTC m=+11.581080440" watchObservedRunningTime="2025-11-08 10:37:34.569095809 +0000 UTC m=+21.615525032"
	Nov 08 10:37:34 no-preload-291044 kubelet[2008]: I1108 10:37:34.612418    2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87be45de-22b0-41ae-8e64-a2bbdcdad8cd-config-volume\") pod \"coredns-66bc5c9577-nvtlg\" (UID: \"87be45de-22b0-41ae-8e64-a2bbdcdad8cd\") " pod="kube-system/coredns-66bc5c9577-nvtlg"
	Nov 08 10:37:34 no-preload-291044 kubelet[2008]: I1108 10:37:34.612498    2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flqwh\" (UniqueName: \"kubernetes.io/projected/87be45de-22b0-41ae-8e64-a2bbdcdad8cd-kube-api-access-flqwh\") pod \"coredns-66bc5c9577-nvtlg\" (UID: \"87be45de-22b0-41ae-8e64-a2bbdcdad8cd\") " pod="kube-system/coredns-66bc5c9577-nvtlg"
	Nov 08 10:37:34 no-preload-291044 kubelet[2008]: I1108 10:37:34.612529    2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a4a078b4-83c3-48a1-9d2d-d92b0275ba61-tmp\") pod \"storage-provisioner\" (UID: \"a4a078b4-83c3-48a1-9d2d-d92b0275ba61\") " pod="kube-system/storage-provisioner"
	Nov 08 10:37:34 no-preload-291044 kubelet[2008]: I1108 10:37:34.612549    2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdlz2\" (UniqueName: \"kubernetes.io/projected/a4a078b4-83c3-48a1-9d2d-d92b0275ba61-kube-api-access-qdlz2\") pod \"storage-provisioner\" (UID: \"a4a078b4-83c3-48a1-9d2d-d92b0275ba61\") " pod="kube-system/storage-provisioner"
	Nov 08 10:37:34 no-preload-291044 kubelet[2008]: W1108 10:37:34.943016    2008 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/crio-a3e4decca6d26cf9d488705b08d6e5990aa7c4bc179a1a41adb38f65b1db92ad WatchSource:0}: Error finding container a3e4decca6d26cf9d488705b08d6e5990aa7c4bc179a1a41adb38f65b1db92ad: Status 404 returned error can't find the container with id a3e4decca6d26cf9d488705b08d6e5990aa7c4bc179a1a41adb38f65b1db92ad
	Nov 08 10:37:35 no-preload-291044 kubelet[2008]: I1108 10:37:35.564861    2008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.564833635 podStartE2EDuration="15.564833635s" podCreationTimestamp="2025-11-08 10:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:37:35.545549014 +0000 UTC m=+22.591978237" watchObservedRunningTime="2025-11-08 10:37:35.564833635 +0000 UTC m=+22.611262858"
	Nov 08 10:37:37 no-preload-291044 kubelet[2008]: I1108 10:37:37.905374    2008 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nvtlg" podStartSLOduration=19.905339402 podStartE2EDuration="19.905339402s" podCreationTimestamp="2025-11-08 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:37:35.565505738 +0000 UTC m=+22.611934969" watchObservedRunningTime="2025-11-08 10:37:37.905339402 +0000 UTC m=+24.951768625"
	Nov 08 10:37:37 no-preload-291044 kubelet[2008]: E1108 10:37:37.912519    2008 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-291044\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-291044' and this object" logger="UnhandledError" reflector="object-\"default\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 08 10:37:37 no-preload-291044 kubelet[2008]: E1108 10:37:37.912887    2008 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox\" is forbidden: User \"system:node:no-preload-291044\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-291044' and this object" podUID="19b26969-1d1f-4969-bd57-67043e5a7c30" pod="default/busybox"
	Nov 08 10:37:37 no-preload-291044 kubelet[2008]: I1108 10:37:37.948686    2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ntkf\" (UniqueName: \"kubernetes.io/projected/19b26969-1d1f-4969-bd57-67043e5a7c30-kube-api-access-5ntkf\") pod \"busybox\" (UID: \"19b26969-1d1f-4969-bd57-67043e5a7c30\") " pod="default/busybox"
	Nov 08 10:37:39 no-preload-291044 kubelet[2008]: E1108 10:37:39.060527    2008 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 08 10:37:39 no-preload-291044 kubelet[2008]: E1108 10:37:39.060576    2008 projected.go:196] Error preparing data for projected volume kube-api-access-5ntkf for pod default/busybox: failed to sync configmap cache: timed out waiting for the condition
	Nov 08 10:37:39 no-preload-291044 kubelet[2008]: E1108 10:37:39.060667    2008 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19b26969-1d1f-4969-bd57-67043e5a7c30-kube-api-access-5ntkf podName:19b26969-1d1f-4969-bd57-67043e5a7c30 nodeName:}" failed. No retries permitted until 2025-11-08 10:37:39.560642673 +0000 UTC m=+26.607071888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5ntkf" (UniqueName: "kubernetes.io/projected/19b26969-1d1f-4969-bd57-67043e5a7c30-kube-api-access-5ntkf") pod "busybox" (UID: "19b26969-1d1f-4969-bd57-67043e5a7c30") : failed to sync configmap cache: timed out waiting for the condition
	Nov 08 10:37:39 no-preload-291044 kubelet[2008]: W1108 10:37:39.742416    2008 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/crio-5a93912a1048c549f67240f02646159db8a7661c51e65d0eaeea8bb2eae86951 WatchSource:0}: Error finding container 5a93912a1048c549f67240f02646159db8a7661c51e65d0eaeea8bb2eae86951: Status 404 returned error can't find the container with id 5a93912a1048c549f67240f02646159db8a7661c51e65d0eaeea8bb2eae86951
	
	
	==> storage-provisioner [542087d579b73bd161c8808aaed3fd8e388082303b996f8535400964d7bb317b] <==
	I1108 10:37:35.107213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:37:35.123014       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:37:35.123144       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:37:35.126250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:35.133513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:37:35.133758       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:37:35.133958       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-291044_4b4c15ac-ee1e-4e82-92be-39cecbef2f57!
	I1108 10:37:35.137136       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62d2386e-59b0-4bb3-9886-de4d8f35e247", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-291044_4b4c15ac-ee1e-4e82-92be-39cecbef2f57 became leader
	W1108 10:37:35.144876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:35.150026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:37:35.237063       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-291044_4b4c15ac-ee1e-4e82-92be-39cecbef2f57!
	W1108 10:37:37.160828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:37.166398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:39.169477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:39.176280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:41.179155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:41.183610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:43.189952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:43.196782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:45.201230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:45.208124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:47.210974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:47.217272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:49.223045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:37:49.241840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
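The storage-provisioner log above acquires its leader lock on the kube-system/k8s.io-minikube-hostpath Endpoints object, which is why the "v1 Endpoints is deprecated" warning repeats every couple of seconds as the lease is renewed. A minimal way to look at that lock object directly, reusing the kubectl context from this post-mortem, would be:

    kubectl --context no-preload-291044 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml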
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-291044 -n no-preload-291044
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-291044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-515571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-515571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (348.864239ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:37:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-515571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
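The exit status 11 above comes from minikube's paused-state check: per the stderr, it runs "sudo runc list -f json" on the node and fails because /run/runc does not exist on this crio node. Assuming the profile name from this run, the same check can be reproduced by hand with:

    out/minikube-linux-arm64 -p newest-cni-515571 ssh -- sudo runc list -f json
    out/minikube-linux-arm64 -p newest-cni-515571 ssh -- ls /run/runc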
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-515571
helpers_test.go:243: (dbg) docker inspect newest-cni-515571:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d",
	        "Created": "2025-11-08T10:37:25.283274548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1231317,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:37:25.370051084Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/hosts",
	        "LogPath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d-json.log",
	        "Name": "/newest-cni-515571",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-515571:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-515571",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d",
	                "LowerDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223/merged",
	                "UpperDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223/diff",
	                "WorkDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-515571",
	                "Source": "/var/lib/docker/volumes/newest-cni-515571/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-515571",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-515571",
	                "name.minikube.sigs.k8s.io": "newest-cni-515571",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce7353e6de7f254979275a08f698243690c1fa3b3fb445a041194fc0f00dd02d",
	            "SandboxKey": "/var/run/docker/netns/ce7353e6de7f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34542"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34543"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34546"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34544"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34545"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-515571": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:53:43:1e:33:77",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e044b4554ec93678a97772c9b706896f0ba13332a99b10f9f482de6020b370fa",
	                    "EndpointID": "82b307f567b74117158545a4cad573579d9acba37cfebc9051e4c1e22fd99ef6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-515571",
	                        "f94bf5ad2ae9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
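Most of what the inspect dump above gets consulted for is the published port map and the 192.168.76.2 attachment on the newest-cni-515571 network. Individual fields can be pulled out with a Go-template format string instead of reading the whole document; the same pattern minikube uses further down in this log to resolve the SSH port would be, for this container:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-515571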
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-515571 -n newest-cni-515571
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-515571 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-515571 logs -n 25: (1.104986058s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-837698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:33 UTC │ 08 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-837698                                                                                                                                                                                                                     │ cert-expiration-837698       │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:34 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-236075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-236075 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:34 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-236075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-790346 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-790346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ image   │ default-k8s-diff-port-236075 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ pause   │ -p default-k8s-diff-port-236075 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-553553                                                                                                                                                                                                               │ disable-driver-mounts-553553 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:37 UTC │
	│ image   │ embed-certs-790346 image list --format=json                                                                                                                                                                                                   │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-790346 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-291044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ stop    │ -p no-preload-291044 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-515571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:37:18
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:37:18.392853 1230576 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:37:18.393080 1230576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:37:18.393111 1230576 out.go:374] Setting ErrFile to fd 2...
	I1108 10:37:18.393132 1230576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:37:18.393427 1230576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:37:18.393888 1230576 out.go:368] Setting JSON to false
	I1108 10:37:18.394917 1230576 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33584,"bootTime":1762564655,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:37:18.395014 1230576 start.go:143] virtualization:  
	I1108 10:37:18.398731 1230576 out.go:179] * [newest-cni-515571] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:37:18.401965 1230576 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:37:18.402038 1230576 notify.go:221] Checking for updates...
	I1108 10:37:18.407869 1230576 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:37:18.410952 1230576 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:37:18.413924 1230576 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:37:18.416906 1230576 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:37:18.419996 1230576 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:37:16.113971 1226201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:16.613578 1226201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:17.113641 1226201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:17.612993 1226201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:18.112987 1226201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:18.569310 1226201 kubeadm.go:1114] duration metric: took 4.397893957s to wait for elevateKubeSystemPrivileges
	I1108 10:37:18.569350 1226201 kubeadm.go:403] duration metric: took 26.002325792s to StartCluster
	I1108 10:37:18.569371 1226201 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:18.569453 1226201 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:37:18.570185 1226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:18.570413 1226201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:37:18.570431 1226201 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:37:18.570983 1226201 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:37:18.571138 1226201 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:37:18.571231 1226201 addons.go:70] Setting storage-provisioner=true in profile "no-preload-291044"
	I1108 10:37:18.571247 1226201 addons.go:239] Setting addon storage-provisioner=true in "no-preload-291044"
	I1108 10:37:18.571269 1226201 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:37:18.571733 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:37:18.571844 1226201 addons.go:70] Setting default-storageclass=true in profile "no-preload-291044"
	I1108 10:37:18.571866 1226201 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-291044"
	I1108 10:37:18.572189 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:37:18.574561 1226201 out.go:179] * Verifying Kubernetes components...
	I1108 10:37:18.422581 1230576 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:37:18.422738 1230576 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:37:18.486460 1230576 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:37:18.486583 1230576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:37:18.627791 1230576 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:37:18.613237435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:37:18.627888 1230576 docker.go:319] overlay module found
	I1108 10:37:18.632679 1230576 out.go:179] * Using the docker driver based on user configuration
	I1108 10:37:18.584077 1226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:37:18.639950 1226201 addons.go:239] Setting addon default-storageclass=true in "no-preload-291044"
	I1108 10:37:18.639991 1226201 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:37:18.644461 1226201 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:37:18.662142 1226201 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:37:18.635524 1230576 start.go:309] selected driver: docker
	I1108 10:37:18.635550 1230576 start.go:930] validating driver "docker" against <nil>
	I1108 10:37:18.635565 1230576 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:37:18.636239 1230576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:37:18.881807 1230576 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:37:18.865224604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:37:18.882003 1230576 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1108 10:37:18.882039 1230576 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1108 10:37:18.882305 1230576 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 10:37:18.888329 1230576 out.go:179] * Using Docker driver with root privileges
	I1108 10:37:18.891174 1230576 cni.go:84] Creating CNI manager for ""
	I1108 10:37:18.891243 1230576 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:37:18.891253 1230576 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:37:18.891338 1230576 start.go:353] cluster config:
	{Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:37:18.894510 1230576 out.go:179] * Starting "newest-cni-515571" primary control-plane node in "newest-cni-515571" cluster
	I1108 10:37:18.898297 1230576 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:37:18.901205 1230576 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:37:18.904191 1230576 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:37:18.904246 1230576 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:37:18.904255 1230576 cache.go:59] Caching tarball of preloaded images
	I1108 10:37:18.904336 1230576 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:37:18.904346 1230576 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:37:18.904475 1230576 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/config.json ...
	I1108 10:37:18.904498 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/config.json: {Name:mk6f54aa92d97a630c1f7d11a4fc88c252cc90db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:18.904679 1230576 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:37:18.931027 1230576 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:37:18.931051 1230576 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:37:18.931064 1230576 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:37:18.931091 1230576 start.go:360] acquireMachinesLock for newest-cni-515571: {Name:mk1ef8d84bc10dec36e1c08ff277aaf3c1e26a13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:37:18.931191 1230576 start.go:364] duration metric: took 85.208µs to acquireMachinesLock for "newest-cni-515571"
	I1108 10:37:18.931216 1230576 start.go:93] Provisioning new machine with config: &{Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:37:18.931287 1230576 start.go:125] createHost starting for "" (driver="docker")
	I1108 10:37:18.665304 1226201 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:37:18.665326 1226201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:37:18.665395 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:37:18.729269 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:37:18.743402 1226201 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:37:18.743425 1226201 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:37:18.743503 1226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:37:18.842090 1226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:37:19.155710 1226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:37:19.287263 1226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:37:19.346896 1226201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:37:19.347009 1226201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:37:20.895063 1226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.739318845s)
	I1108 10:37:20.895110 1226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.607828935s)
	I1108 10:37:20.895400 1226201 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.54837254s)
	I1108 10:37:20.896085 1226201 node_ready.go:35] waiting up to 6m0s for node "no-preload-291044" to be "Ready" ...
	I1108 10:37:20.896303 1226201 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.549377113s)
	I1108 10:37:20.896319 1226201 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1108 10:37:20.995689 1226201 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 10:37:18.934911 1230576 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:37:18.935162 1230576 start.go:159] libmachine.API.Create for "newest-cni-515571" (driver="docker")
	I1108 10:37:18.935199 1230576 client.go:173] LocalClient.Create starting
	I1108 10:37:18.935258 1230576 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem
	I1108 10:37:18.935289 1230576 main.go:143] libmachine: Decoding PEM data...
	I1108 10:37:18.935303 1230576 main.go:143] libmachine: Parsing certificate...
	I1108 10:37:18.935358 1230576 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem
	I1108 10:37:18.935375 1230576 main.go:143] libmachine: Decoding PEM data...
	I1108 10:37:18.935385 1230576 main.go:143] libmachine: Parsing certificate...
	I1108 10:37:18.935730 1230576 cli_runner.go:164] Run: docker network inspect newest-cni-515571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:37:18.965407 1230576 cli_runner.go:211] docker network inspect newest-cni-515571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:37:18.965483 1230576 network_create.go:284] running [docker network inspect newest-cni-515571] to gather additional debugging logs...
	I1108 10:37:18.965500 1230576 cli_runner.go:164] Run: docker network inspect newest-cni-515571
	W1108 10:37:18.997652 1230576 cli_runner.go:211] docker network inspect newest-cni-515571 returned with exit code 1
	I1108 10:37:18.997678 1230576 network_create.go:287] error running [docker network inspect newest-cni-515571]: docker network inspect newest-cni-515571: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-515571 not found
	I1108 10:37:18.997692 1230576 network_create.go:289] output of [docker network inspect newest-cni-515571]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-515571 not found
	
	** /stderr **
	I1108 10:37:18.997803 1230576 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:37:19.035441 1230576 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f127b1978c3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:c7:37:65:8c:96} reservation:<nil>}
	I1108 10:37:19.035748 1230576 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b98bf73d2e94 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:99:be:46:ea:86} reservation:<nil>}
	I1108 10:37:19.036054 1230576 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c4df73992be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:ad:c1:c0:ea:6d} reservation:<nil>}
	I1108 10:37:19.036512 1230576 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a25150}
	I1108 10:37:19.036532 1230576 network_create.go:124] attempt to create docker network newest-cni-515571 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 10:37:19.036588 1230576 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-515571 newest-cni-515571
	I1108 10:37:19.138125 1230576 network_create.go:108] docker network newest-cni-515571 192.168.76.0/24 created
	I1108 10:37:19.138161 1230576 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-515571" container
	I1108 10:37:19.138241 1230576 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:37:19.172732 1230576 cli_runner.go:164] Run: docker volume create newest-cni-515571 --label name.minikube.sigs.k8s.io=newest-cni-515571 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:37:19.200520 1230576 oci.go:103] Successfully created a docker volume newest-cni-515571
	I1108 10:37:19.200595 1230576 cli_runner.go:164] Run: docker run --rm --name newest-cni-515571-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-515571 --entrypoint /usr/bin/test -v newest-cni-515571:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:37:19.952987 1230576 oci.go:107] Successfully prepared a docker volume newest-cni-515571
	I1108 10:37:19.953028 1230576 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:37:19.953047 1230576 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:37:19.953107 1230576 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-515571:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 10:37:20.998836 1226201 addons.go:515] duration metric: took 2.427679537s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 10:37:21.402506 1226201 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-291044" context rescaled to 1 replicas
	W1108 10:37:22.899541 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	W1108 10:37:25.401220 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	I1108 10:37:25.214432 1230576 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-515571:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (5.261280504s)
	I1108 10:37:25.214465 1230576 kic.go:203] duration metric: took 5.261414884s to extract preloaded images to volume ...
	W1108 10:37:25.214615 1230576 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:37:25.214721 1230576 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:37:25.267818 1230576 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-515571 --name newest-cni-515571 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-515571 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-515571 --network newest-cni-515571 --ip 192.168.76.2 --volume newest-cni-515571:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:37:25.600511 1230576 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Running}}
	I1108 10:37:25.623244 1230576 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:37:25.647547 1230576 cli_runner.go:164] Run: docker exec newest-cni-515571 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:37:25.696626 1230576 oci.go:144] the created container "newest-cni-515571" has a running status.
	I1108 10:37:25.696661 1230576 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa...
	I1108 10:37:26.214314 1230576 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:37:26.238779 1230576 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:37:26.264916 1230576 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:37:26.264936 1230576 kic_runner.go:114] Args: [docker exec --privileged newest-cni-515571 chown docker:docker /home/docker/.ssh/authorized_keys]
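The id_rsa / authorized_keys steps above (kic.go:225 through the chown) amount to: generate an RSA key pair on the host, then install the public half as /home/docker/.ssh/authorized_keys inside the container. A hedged Go sketch of the host-side part, assuming the golang.org/x/crypto/ssh module is available; file names are illustrative and this is not minikube's actual kic code.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Generate the key pair (the real key lives under
		// .minikube/machines/<profile>/id_rsa).
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		// PEM-encode the private key for id_rsa.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(priv),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
			panic(err)
		}

		// Produce the single authorized_keys line that gets copied into the
		// container at /home/docker/.ssh/authorized_keys.
		pub, err := ssh.NewPublicKey(&priv.PublicKey)
		if err != nil {
			panic(err)
		}
		line := ssh.MarshalAuthorizedKey(pub)
		fmt.Printf("authorized_keys entry (%d bytes): %s", len(line), line)
	}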
	I1108 10:37:26.322400 1230576 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:37:26.343415 1230576 machine.go:94] provisionDockerMachine start ...
	I1108 10:37:26.343501 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:26.364502 1230576 main.go:143] libmachine: Using SSH client type: native
	I1108 10:37:26.364931 1230576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1108 10:37:26.364945 1230576 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:37:26.564131 1230576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-515571
	
	I1108 10:37:26.564196 1230576 ubuntu.go:182] provisioning hostname "newest-cni-515571"
	I1108 10:37:26.564292 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:26.584247 1230576 main.go:143] libmachine: Using SSH client type: native
	I1108 10:37:26.584937 1230576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1108 10:37:26.584957 1230576 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-515571 && echo "newest-cni-515571" | sudo tee /etc/hostname
	I1108 10:37:26.759236 1230576 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-515571
	
	I1108 10:37:26.759391 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:26.784910 1230576 main.go:143] libmachine: Using SSH client type: native
	I1108 10:37:26.785210 1230576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1108 10:37:26.785233 1230576 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-515571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-515571/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-515571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:37:26.948474 1230576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:37:26.948503 1230576 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:37:26.948545 1230576 ubuntu.go:190] setting up certificates
	I1108 10:37:26.948561 1230576 provision.go:84] configureAuth start
	I1108 10:37:26.948646 1230576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:37:26.965740 1230576 provision.go:143] copyHostCerts
	I1108 10:37:26.965808 1230576 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:37:26.965820 1230576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:37:26.965900 1230576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:37:26.965999 1230576 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:37:26.966011 1230576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:37:26.966040 1230576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:37:26.966158 1230576 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:37:26.966175 1230576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:37:26.966206 1230576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:37:26.966265 1230576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.newest-cni-515571 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-515571]
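The server cert generated above carries both IP and DNS SANs (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-515571). The sketch below shows how such a certificate template looks with Go's crypto/x509; it self-signs only for brevity, whereas the real server.pem is signed with the shared minikube CA (ca.pem / ca-key.pem).

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs from the log line above: IPs and DNS names live in separate
		// fields of the x509 template.
		template := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject: pkix.Name{
				Organization: []string{"jenkins.newest-cni-515571"},
			},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:    []string{"localhost", "minikube", "newest-cni-515571"},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Self-signed here; the real flow passes the CA certificate and key
		// as the parent/signer instead of the template itself.
		der, err := x509.CreateCertificate(rand.Reader, &template, &template, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		if err := os.WriteFile("server.pem", pemBytes, 0o644); err != nil {
			panic(err)
		}
	}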
	I1108 10:37:27.284812 1230576 provision.go:177] copyRemoteCerts
	I1108 10:37:27.284889 1230576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:37:27.284935 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:27.303896 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:27.408482 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:37:27.426288 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:37:27.445604 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:37:27.467255 1230576 provision.go:87] duration metric: took 518.654501ms to configureAuth
	I1108 10:37:27.467323 1230576 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:37:27.467535 1230576 config.go:182] Loaded profile config "newest-cni-515571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:37:27.467679 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:27.486704 1230576 main.go:143] libmachine: Using SSH client type: native
	I1108 10:37:27.487015 1230576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1108 10:37:27.487029 1230576 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:37:27.778062 1230576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:37:27.778107 1230576 machine.go:97] duration metric: took 1.434655405s to provisionDockerMachine
	I1108 10:37:27.778118 1230576 client.go:176] duration metric: took 8.842913372s to LocalClient.Create
	I1108 10:37:27.778143 1230576 start.go:167] duration metric: took 8.842983023s to libmachine.API.Create "newest-cni-515571"
	I1108 10:37:27.778155 1230576 start.go:293] postStartSetup for "newest-cni-515571" (driver="docker")
	I1108 10:37:27.778165 1230576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:37:27.778229 1230576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:37:27.778276 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:27.794708 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:27.902360 1230576 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:37:27.905819 1230576 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:37:27.905851 1230576 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:37:27.905862 1230576 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:37:27.905919 1230576 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:37:27.906002 1230576 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:37:27.906113 1230576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:37:27.913412 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:37:27.930774 1230576 start.go:296] duration metric: took 152.604091ms for postStartSetup
	I1108 10:37:27.931145 1230576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:37:27.950350 1230576 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/config.json ...
	I1108 10:37:27.950626 1230576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:37:27.950683 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:27.967816 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:28.074537 1230576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:37:28.079965 1230576 start.go:128] duration metric: took 9.148662577s to createHost
	I1108 10:37:28.079990 1230576 start.go:83] releasing machines lock for "newest-cni-515571", held for 9.148790204s
	I1108 10:37:28.080068 1230576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:37:28.099256 1230576 ssh_runner.go:195] Run: cat /version.json
	I1108 10:37:28.099284 1230576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:37:28.099310 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:28.099356 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:28.123345 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:28.128758 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:28.228094 1230576 ssh_runner.go:195] Run: systemctl --version
	I1108 10:37:28.320989 1230576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:37:28.361334 1230576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:37:28.365962 1230576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:37:28.366038 1230576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:37:28.394683 1230576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:37:28.394708 1230576 start.go:496] detecting cgroup driver to use...
	I1108 10:37:28.394740 1230576 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:37:28.394793 1230576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:37:28.412908 1230576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:37:28.426004 1230576 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:37:28.426097 1230576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:37:28.443870 1230576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:37:28.468712 1230576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:37:28.586792 1230576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:37:28.721137 1230576 docker.go:234] disabling docker service ...
	I1108 10:37:28.721274 1230576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:37:28.743459 1230576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:37:28.756318 1230576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:37:28.872135 1230576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:37:29.002976 1230576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:37:29.018233 1230576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:37:29.035122 1230576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:37:29.035232 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.044937 1230576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:37:29.045031 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.055122 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.065483 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.075695 1230576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:37:29.084145 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.093791 1230576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:37:29.107749 1230576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
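Taken together, the sed edits above point CRI-O at the pause image, switch it to the cgroupfs driver, pin conmon's cgroup, and re-open low ports for unprivileged pods. Roughly, /etc/crio/crio.conf.d/02-crio.conf should end up containing a fragment like the following (an illustrative reconstruction; the exact section layout and neighbouring keys depend on the kicbase image):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]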
	I1108 10:37:29.117281 1230576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:37:29.128807 1230576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:37:29.136823 1230576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:37:29.257248 1230576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:37:29.384706 1230576 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:37:29.384809 1230576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:37:29.388862 1230576 start.go:564] Will wait 60s for crictl version
	I1108 10:37:29.388952 1230576 ssh_runner.go:195] Run: which crictl
	I1108 10:37:29.392588 1230576 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:37:29.419677 1230576 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:37:29.419790 1230576 ssh_runner.go:195] Run: crio --version
	I1108 10:37:29.452988 1230576 ssh_runner.go:195] Run: crio --version
	I1108 10:37:29.490171 1230576 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:37:29.492905 1230576 cli_runner.go:164] Run: docker network inspect newest-cni-515571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:37:29.509482 1230576 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:37:29.512967 1230576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:37:29.525857 1230576 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16

	W1108 10:37:27.899412 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	W1108 10:37:30.399371 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	I1108 10:37:29.528735 1230576 kubeadm.go:884] updating cluster {Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:37:29.528862 1230576 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:37:29.528955 1230576 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:37:29.560178 1230576 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:37:29.560198 1230576 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:37:29.560253 1230576 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:37:29.584872 1230576 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:37:29.584895 1230576 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:37:29.584903 1230576 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:37:29.585012 1230576 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-515571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:37:29.585101 1230576 ssh_runner.go:195] Run: crio config
	I1108 10:37:29.646212 1230576 cni.go:84] Creating CNI manager for ""
	I1108 10:37:29.646232 1230576 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:37:29.646253 1230576 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 10:37:29.646279 1230576 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-515571 NodeName:newest-cni-515571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:37:29.646418 1230576 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-515571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:37:29.646500 1230576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:37:29.654084 1230576 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:37:29.654207 1230576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:37:29.662144 1230576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 10:37:29.674858 1230576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:37:29.688308 1230576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1108 10:37:29.701709 1230576 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:37:29.705477 1230576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:37:29.714839 1230576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:37:29.828136 1230576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:37:29.843378 1230576 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571 for IP: 192.168.76.2
	I1108 10:37:29.843399 1230576 certs.go:195] generating shared ca certs ...
	I1108 10:37:29.843416 1230576 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:29.843562 1230576 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:37:29.843609 1230576 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:37:29.843619 1230576 certs.go:257] generating profile certs ...
	I1108 10:37:29.843674 1230576 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.key
	I1108 10:37:29.843691 1230576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.crt with IP's: []
	I1108 10:37:30.305797 1230576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.crt ...
	I1108 10:37:30.305830 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.crt: {Name:mk06ce47763c8d097a4e58e433564ff92524f3e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:30.306064 1230576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.key ...
	I1108 10:37:30.306081 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.key: {Name:mkc9ade5bf32819b647e9e1b1ffb1b7497d9c208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:30.306188 1230576 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key.0dbe4724
	I1108 10:37:30.306208 1230576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt.0dbe4724 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 10:37:30.965292 1230576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt.0dbe4724 ...
	I1108 10:37:30.965332 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt.0dbe4724: {Name:mkec282190793062f5c7282b363c5e9e32bdda76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:30.965551 1230576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key.0dbe4724 ...
	I1108 10:37:30.965577 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key.0dbe4724: {Name:mkd31fec51db04bc2294bf2ccfc9b9fab07e2fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:30.965662 1230576 certs.go:382] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt.0dbe4724 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt
	I1108 10:37:30.965751 1230576 certs.go:386] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key.0dbe4724 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key
	I1108 10:37:30.965820 1230576 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key
	I1108 10:37:30.965842 1230576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.crt with IP's: []
	I1108 10:37:31.308171 1230576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.crt ...
	I1108 10:37:31.308200 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.crt: {Name:mkee8e1f5b7c9323087787065c2706248c72ac63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:31.308394 1230576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key ...
	I1108 10:37:31.308410 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key: {Name:mk8e858113e3ac42afcb7ef83cff271240a882a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:31.308618 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:37:31.308668 1230576 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:37:31.308683 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:37:31.308713 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:37:31.308741 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:37:31.308767 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:37:31.308816 1230576 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:37:31.309382 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:37:31.327953 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:37:31.352239 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:37:31.372340 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:37:31.392586 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 10:37:31.414462 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:37:31.433312 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:37:31.460820 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:37:31.479248 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:37:31.503393 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:37:31.522457 1230576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:37:31.539786 1230576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:37:31.552400 1230576 ssh_runner.go:195] Run: openssl version
	I1108 10:37:31.558898 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:37:31.567140 1230576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:37:31.570658 1230576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:37:31.570736 1230576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:37:31.611617 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:37:31.619892 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:37:31.627778 1230576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:37:31.631200 1230576 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:37:31.631263 1230576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:37:31.672064 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:37:31.680335 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:37:31.688413 1230576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:37:31.692219 1230576 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:37:31.692312 1230576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:37:31.733018 1230576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
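The openssl/ln pairs above install each CA under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0, 3ec20f2e.0). A small Go sketch of the same hash-then-symlink step, shelling out to openssl just as the logged commands do; the path is taken from the log and root privileges are assumed.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash mirrors the "openssl x509 -hash" + "ln -fs" pair above:
	// compute the subject hash of a CA certificate and expose it under
	// /etc/ssl/certs/<hash>.0 so TLS clients can find it.
	func linkByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // already linked
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}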
	I1108 10:37:31.741367 1230576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:37:31.745099 1230576 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:37:31.745170 1230576 kubeadm.go:401] StartCluster: {Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:37:31.745262 1230576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:37:31.745329 1230576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:37:31.770511 1230576 cri.go:89] found id: ""
	I1108 10:37:31.770635 1230576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:37:31.778493 1230576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:37:31.786279 1230576 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:37:31.786395 1230576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:37:31.794084 1230576 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:37:31.794105 1230576 kubeadm.go:158] found existing configuration files:
	
	I1108 10:37:31.794187 1230576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:37:31.801711 1230576 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:37:31.801793 1230576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:37:31.809094 1230576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:37:31.816645 1230576 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:37:31.816746 1230576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:37:31.824108 1230576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:37:31.831781 1230576 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:37:31.831881 1230576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:37:31.839289 1230576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:37:31.846835 1230576 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:37:31.846923 1230576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:37:31.854526 1230576 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:37:31.895498 1230576 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:37:31.895766 1230576 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:37:31.927935 1230576 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:37:31.928041 1230576 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:37:31.928106 1230576 kubeadm.go:319] OS: Linux
	I1108 10:37:31.928192 1230576 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:37:31.928295 1230576 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:37:31.928397 1230576 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:37:31.928553 1230576 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:37:31.928655 1230576 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:37:31.928749 1230576 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:37:31.928829 1230576 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:37:31.928933 1230576 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:37:31.929025 1230576 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:37:31.997449 1230576 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:37:31.997618 1230576 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:37:31.997782 1230576 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:37:32.011986 1230576 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 10:37:32.017952 1230576 out.go:252]   - Generating certificates and keys ...
	I1108 10:37:32.018077 1230576 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:37:32.018161 1230576 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:37:32.891045 1230576 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:37:33.114724 1230576 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1108 10:37:32.400366 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	W1108 10:37:34.404314 1226201 node_ready.go:57] node "no-preload-291044" has "Ready":"False" status (will retry)
	I1108 10:37:34.899908 1226201 node_ready.go:49] node "no-preload-291044" is "Ready"
	I1108 10:37:34.899933 1226201 node_ready.go:38] duration metric: took 14.00383252s for node "no-preload-291044" to be "Ready" ...
	I1108 10:37:34.899948 1226201 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:37:34.900005 1226201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:37:34.913763 1226201 api_server.go:72] duration metric: took 16.343302509s to wait for apiserver process to appear ...
	I1108 10:37:34.913784 1226201 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:37:34.913803 1226201 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:37:34.925303 1226201 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:37:34.928599 1226201 api_server.go:141] control plane version: v1.34.1
	I1108 10:37:34.928631 1226201 api_server.go:131] duration metric: took 14.840232ms to wait for apiserver health ...
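The healthz probe above simply expects an HTTP 200 with body "ok" from the API server. A minimal Go sketch of that probe; it skips TLS verification only to stay short, whereas the real check trusts the cluster CA from the generated kubeconfig.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative shortcut: production code verifies against the
				// cluster CA instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}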
	I1108 10:37:34.928641 1226201 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:37:34.937317 1226201 system_pods.go:59] 8 kube-system pods found
	I1108 10:37:34.937397 1226201 system_pods.go:61] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:37:34.937419 1226201 system_pods.go:61] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running
	I1108 10:37:34.937460 1226201 system_pods.go:61] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:37:34.937487 1226201 system_pods.go:61] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running
	I1108 10:37:34.937513 1226201 system_pods.go:61] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running
	I1108 10:37:34.937547 1226201 system_pods.go:61] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:37:34.937570 1226201 system_pods.go:61] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running
	I1108 10:37:34.937596 1226201 system_pods.go:61] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:37:34.937631 1226201 system_pods.go:74] duration metric: took 8.98324ms to wait for pod list to return data ...
	I1108 10:37:34.937657 1226201 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:37:34.944079 1226201 default_sa.go:45] found service account: "default"
	I1108 10:37:34.944151 1226201 default_sa.go:55] duration metric: took 6.474277ms for default service account to be created ...
	I1108 10:37:34.944175 1226201 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:37:34.947653 1226201 system_pods.go:86] 8 kube-system pods found
	I1108 10:37:34.947727 1226201 system_pods.go:89] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:37:34.947748 1226201 system_pods.go:89] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running
	I1108 10:37:34.947774 1226201 system_pods.go:89] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:37:34.947807 1226201 system_pods.go:89] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running
	I1108 10:37:34.947831 1226201 system_pods.go:89] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running
	I1108 10:37:34.947852 1226201 system_pods.go:89] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:37:34.947890 1226201 system_pods.go:89] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running
	I1108 10:37:34.947916 1226201 system_pods.go:89] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:37:34.947964 1226201 retry.go:31] will retry after 241.025595ms: missing components: kube-dns
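The growing retry intervals (241ms, then 303ms, then 396ms) come from a backoff-with-jitter loop around the "k8s-apps running" check. A simplified Go sketch of that pattern; pollPods, the growth factor, and the short deadline are illustrative stand-ins rather than minikube's actual retry.go.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// pollPods stands in for the kube-system pod listing above; it returns an
	// error naming whichever component is still missing (here, kube-dns).
	func pollPods() error {
		return errors.New("missing components: kube-dns")
	}

	func main() {
		wait := 200 * time.Millisecond
		deadline := time.Now().Add(2 * time.Second) // real waits run for minutes
		for time.Now().Before(deadline) {
			if err := pollPods(); err == nil {
				fmt.Println("all kube-system pods are running")
				return
			} else {
				// Grow the delay and add jitter, which is why the logged
				// intervals creep upward unevenly.
				jitter := time.Duration(rand.Int63n(int64(wait) / 4))
				fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
				time.Sleep(wait + jitter)
				wait = wait * 13 / 10
			}
		}
		fmt.Println("timed out waiting for k8s-apps")
	}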
	I1108 10:37:35.193413 1226201 system_pods.go:86] 8 kube-system pods found
	I1108 10:37:35.193495 1226201 system_pods.go:89] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:37:35.193665 1226201 system_pods.go:89] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running
	I1108 10:37:35.193695 1226201 system_pods.go:89] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:37:35.193718 1226201 system_pods.go:89] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running
	I1108 10:37:35.193754 1226201 system_pods.go:89] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running
	I1108 10:37:35.193777 1226201 system_pods.go:89] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:37:35.193797 1226201 system_pods.go:89] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running
	I1108 10:37:35.193837 1226201 system_pods.go:89] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:37:35.193871 1226201 retry.go:31] will retry after 303.703093ms: missing components: kube-dns
	I1108 10:37:35.503661 1226201 system_pods.go:86] 8 kube-system pods found
	I1108 10:37:35.503806 1226201 system_pods.go:89] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:37:35.503860 1226201 system_pods.go:89] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running
	I1108 10:37:35.503899 1226201 system_pods.go:89] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:37:35.503951 1226201 system_pods.go:89] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running
	I1108 10:37:35.503977 1226201 system_pods.go:89] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running
	I1108 10:37:35.504034 1226201 system_pods.go:89] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:37:35.504058 1226201 system_pods.go:89] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running
	I1108 10:37:35.504083 1226201 system_pods.go:89] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 10:37:35.504151 1226201 retry.go:31] will retry after 396.709987ms: missing components: kube-dns
	I1108 10:37:35.906572 1226201 system_pods.go:86] 8 kube-system pods found
	I1108 10:37:35.906654 1226201 system_pods.go:89] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Running
	I1108 10:37:35.906676 1226201 system_pods.go:89] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running
	I1108 10:37:35.906701 1226201 system_pods.go:89] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:37:35.906736 1226201 system_pods.go:89] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running
	I1108 10:37:35.906761 1226201 system_pods.go:89] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running
	I1108 10:37:35.906783 1226201 system_pods.go:89] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:37:35.906819 1226201 system_pods.go:89] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running
	I1108 10:37:35.906843 1226201 system_pods.go:89] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Running
	I1108 10:37:35.906869 1226201 system_pods.go:126] duration metric: took 962.672956ms to wait for k8s-apps to be running ...
	I1108 10:37:35.906906 1226201 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:37:35.906999 1226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:37:35.924472 1226201 system_svc.go:56] duration metric: took 17.524427ms WaitForService to wait for kubelet
	I1108 10:37:35.924549 1226201 kubeadm.go:587] duration metric: took 17.354081514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:37:35.924581 1226201 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:37:35.928014 1226201 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:37:35.928093 1226201 node_conditions.go:123] node cpu capacity is 2
	I1108 10:37:35.928130 1226201 node_conditions.go:105] duration metric: took 3.51398ms to run NodePressure ...
	I1108 10:37:35.928176 1226201 start.go:242] waiting for startup goroutines ...
	I1108 10:37:35.928205 1226201 start.go:247] waiting for cluster config update ...
	I1108 10:37:35.928235 1226201 start.go:256] writing updated cluster config ...
	I1108 10:37:35.928595 1226201 ssh_runner.go:195] Run: rm -f paused
	I1108 10:37:35.933378 1226201 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:37:35.937156 1226201 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nvtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.943160 1226201 pod_ready.go:94] pod "coredns-66bc5c9577-nvtlg" is "Ready"
	I1108 10:37:35.943183 1226201 pod_ready.go:86] duration metric: took 5.958846ms for pod "coredns-66bc5c9577-nvtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.946277 1226201 pod_ready.go:83] waiting for pod "etcd-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.951906 1226201 pod_ready.go:94] pod "etcd-no-preload-291044" is "Ready"
	I1108 10:37:35.951928 1226201 pod_ready.go:86] duration metric: took 5.630438ms for pod "etcd-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.954737 1226201 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.960329 1226201 pod_ready.go:94] pod "kube-apiserver-no-preload-291044" is "Ready"
	I1108 10:37:35.960402 1226201 pod_ready.go:86] duration metric: took 5.599727ms for pod "kube-apiserver-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:35.963176 1226201 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:36.338552 1226201 pod_ready.go:94] pod "kube-controller-manager-no-preload-291044" is "Ready"
	I1108 10:37:36.338650 1226201 pod_ready.go:86] duration metric: took 375.41436ms for pod "kube-controller-manager-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:36.538024 1226201 pod_ready.go:83] waiting for pod "kube-proxy-2m8tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:36.938570 1226201 pod_ready.go:94] pod "kube-proxy-2m8tx" is "Ready"
	I1108 10:37:36.938602 1226201 pod_ready.go:86] duration metric: took 400.497964ms for pod "kube-proxy-2m8tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:37.138419 1226201 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:37.538233 1226201 pod_ready.go:94] pod "kube-scheduler-no-preload-291044" is "Ready"
	I1108 10:37:37.538266 1226201 pod_ready.go:86] duration metric: took 399.805637ms for pod "kube-scheduler-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:37:37.538280 1226201 pod_ready.go:40] duration metric: took 1.604837138s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:37:37.623398 1226201 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:37:37.626969 1226201 out.go:179] * Done! kubectl is now configured to use "no-preload-291044" cluster and "default" namespace by default
	I1108 10:37:34.404460 1230576 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:37:35.732035 1230576 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 10:37:35.971238 1230576 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:37:35.971584 1230576 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-515571] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:37:36.166246 1230576 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:37:36.166643 1230576 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-515571] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:37:36.678676 1230576 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:37:36.975730 1230576 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:37:37.502450 1230576 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:37:37.502871 1230576 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:37:37.687443 1230576 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:37:38.009115 1230576 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:37:38.114390 1230576 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:37:38.435967 1230576 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:37:38.887824 1230576 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:37:38.888801 1230576 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:37:38.893873 1230576 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:37:38.897309 1230576 out.go:252]   - Booting up control plane ...
	I1108 10:37:38.897418 1230576 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:37:38.903581 1230576 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:37:38.903664 1230576 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:37:38.916897 1230576 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:37:38.917018 1230576 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:37:38.924256 1230576 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:37:38.924668 1230576 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:37:38.924719 1230576 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:37:39.074527 1230576 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:37:39.074694 1230576 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:37:41.074861 1230576 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000848472s
	I1108 10:37:41.080836 1230576 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:37:41.080945 1230576 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 10:37:41.081045 1230576 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:37:41.081139 1230576 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 10:37:44.962892 1230576 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.882805188s
	I1108 10:37:46.280549 1230576 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.201168879s
	I1108 10:37:48.082869 1230576 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003289408s
	I1108 10:37:48.111955 1230576 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:37:48.137932 1230576 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:37:48.175243 1230576 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:37:48.175773 1230576 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-515571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:37:48.197124 1230576 kubeadm.go:319] [bootstrap-token] Using token: plem44.4qp9l46repzins7g
	I1108 10:37:48.200074 1230576 out.go:252]   - Configuring RBAC rules ...
	I1108 10:37:48.200198 1230576 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:37:48.215380 1230576 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:37:48.230836 1230576 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:37:48.246209 1230576 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:37:48.251495 1230576 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:37:48.255705 1230576 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:37:48.493366 1230576 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:37:48.963944 1230576 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:37:49.491475 1230576 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:37:49.493086 1230576 kubeadm.go:319] 
	I1108 10:37:49.493174 1230576 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:37:49.493181 1230576 kubeadm.go:319] 
	I1108 10:37:49.493261 1230576 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:37:49.493266 1230576 kubeadm.go:319] 
	I1108 10:37:49.493292 1230576 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:37:49.493787 1230576 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:37:49.493854 1230576 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:37:49.493860 1230576 kubeadm.go:319] 
	I1108 10:37:49.493916 1230576 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:37:49.493921 1230576 kubeadm.go:319] 
	I1108 10:37:49.493971 1230576 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:37:49.493975 1230576 kubeadm.go:319] 
	I1108 10:37:49.494029 1230576 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:37:49.494115 1230576 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:37:49.494187 1230576 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:37:49.494192 1230576 kubeadm.go:319] 
	I1108 10:37:49.494517 1230576 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:37:49.494603 1230576 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:37:49.494608 1230576 kubeadm.go:319] 
	I1108 10:37:49.494920 1230576 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token plem44.4qp9l46repzins7g \
	I1108 10:37:49.495033 1230576 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 \
	I1108 10:37:49.495263 1230576 kubeadm.go:319] 	--control-plane 
	I1108 10:37:49.495273 1230576 kubeadm.go:319] 
	I1108 10:37:49.495575 1230576 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:37:49.495585 1230576 kubeadm.go:319] 
	I1108 10:37:49.495908 1230576 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token plem44.4qp9l46repzins7g \
	I1108 10:37:49.496195 1230576 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 
	I1108 10:37:49.501935 1230576 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:37:49.502174 1230576 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:37:49.502292 1230576 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 10:37:49.502308 1230576 cni.go:84] Creating CNI manager for ""
	I1108 10:37:49.502315 1230576 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:37:49.505545 1230576 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 10:37:49.508421 1230576 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:37:49.513017 1230576 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 10:37:49.513034 1230576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:37:49.539017 1230576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:37:49.977089 1230576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:37:49.977231 1230576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:49.977302 1230576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-515571 minikube.k8s.io/updated_at=2025_11_08T10_37_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=newest-cni-515571 minikube.k8s.io/primary=true
	I1108 10:37:50.272124 1230576 ops.go:34] apiserver oom_adj: -16
	I1108 10:37:50.272249 1230576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:50.772915 1230576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:51.272559 1230576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:51.772375 1230576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:52.272520 1230576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:52.772298 1230576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:53.272751 1230576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:53.772409 1230576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:37:53.879898 1230576 kubeadm.go:1114] duration metric: took 3.902710154s to wait for elevateKubeSystemPrivileges
	I1108 10:37:53.879922 1230576 kubeadm.go:403] duration metric: took 22.134775605s to StartCluster
	I1108 10:37:53.879938 1230576 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:53.879998 1230576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:37:53.880958 1230576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:37:53.881163 1230576 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:37:53.881299 1230576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:37:53.881581 1230576 config.go:182] Loaded profile config "newest-cni-515571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:37:53.881617 1230576 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:37:53.881677 1230576 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-515571"
	I1108 10:37:53.881690 1230576 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-515571"
	I1108 10:37:53.881712 1230576 host.go:66] Checking if "newest-cni-515571" exists ...
	I1108 10:37:53.882507 1230576 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:37:53.882678 1230576 addons.go:70] Setting default-storageclass=true in profile "newest-cni-515571"
	I1108 10:37:53.882697 1230576 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-515571"
	I1108 10:37:53.882946 1230576 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:37:53.885542 1230576 out.go:179] * Verifying Kubernetes components...
	I1108 10:37:53.889226 1230576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:37:53.927379 1230576 addons.go:239] Setting addon default-storageclass=true in "newest-cni-515571"
	I1108 10:37:53.927425 1230576 host.go:66] Checking if "newest-cni-515571" exists ...
	I1108 10:37:53.933158 1230576 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:37:53.937084 1230576 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:37:53.940267 1230576 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:37:53.940304 1230576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:37:53.940400 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:53.991791 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:53.992688 1230576 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:37:53.992707 1230576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:37:53.992762 1230576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:54.020710 1230576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:37:54.254508 1230576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:37:54.273895 1230576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:37:54.274014 1230576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:37:54.320421 1230576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:37:54.492949 1230576 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:37:54.493067 1230576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:37:54.631024 1230576 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1108 10:37:54.890547 1230576 api_server.go:72] duration metric: took 1.009356812s to wait for apiserver process to appear ...
	I1108 10:37:54.890609 1230576 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:37:54.890642 1230576 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:37:54.893476 1230576 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1108 10:37:54.897121 1230576 addons.go:515] duration metric: took 1.015487707s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1108 10:37:54.900800 1230576 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:37:54.902470 1230576 api_server.go:141] control plane version: v1.34.1
	I1108 10:37:54.902493 1230576 api_server.go:131] duration metric: took 11.863288ms to wait for apiserver health ...
	I1108 10:37:54.902502 1230576 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:37:54.906156 1230576 system_pods.go:59] 8 kube-system pods found
	I1108 10:37:54.906193 1230576 system_pods.go:61] "coredns-66bc5c9577-tzpcv" [e29d787c-07fa-45a9-8486-67e87bde431e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 10:37:54.906201 1230576 system_pods.go:61] "etcd-newest-cni-515571" [5340f708-b23d-4f0b-bda7-995b964333e2] Running
	I1108 10:37:54.906247 1230576 system_pods.go:61] "kindnet-6vtjh" [69f8e634-a5cb-438a-a6ac-5762a43d39e5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 10:37:54.906264 1230576 system_pods.go:61] "kube-apiserver-newest-cni-515571" [82e0acec-a5e0-43ed-b26f-072f360ced86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:37:54.906272 1230576 system_pods.go:61] "kube-controller-manager-newest-cni-515571" [3966d3a4-3fac-4d01-858a-27ad292e0b25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:37:54.906287 1230576 system_pods.go:61] "kube-proxy-cqlhl" [0385ed05-d22d-4bb0-b165-eeb7226e70fd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 10:37:54.906320 1230576 system_pods.go:61] "kube-scheduler-newest-cni-515571" [6e339344-0ff3-412a-b78f-55ef23e04a9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:37:54.906333 1230576 system_pods.go:61] "storage-provisioner" [db0e8015-0d1b-4030-ad64-744fe3afd379] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 10:37:54.906351 1230576 system_pods.go:74] duration metric: took 3.832969ms to wait for pod list to return data ...
	I1108 10:37:54.906367 1230576 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:37:54.908907 1230576 default_sa.go:45] found service account: "default"
	I1108 10:37:54.908934 1230576 default_sa.go:55] duration metric: took 2.560302ms for default service account to be created ...
	I1108 10:37:54.908946 1230576 kubeadm.go:587] duration metric: took 1.027760747s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 10:37:54.908963 1230576 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:37:54.911404 1230576 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:37:54.911434 1230576 node_conditions.go:123] node cpu capacity is 2
	I1108 10:37:54.911448 1230576 node_conditions.go:105] duration metric: took 2.479549ms to run NodePressure ...
	I1108 10:37:54.911460 1230576 start.go:242] waiting for startup goroutines ...
	I1108 10:37:55.137186 1230576 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-515571" context rescaled to 1 replicas
	I1108 10:37:55.137237 1230576 start.go:247] waiting for cluster config update ...
	I1108 10:37:55.137251 1230576 start.go:256] writing updated cluster config ...
	I1108 10:37:55.137614 1230576 ssh_runner.go:195] Run: rm -f paused
	I1108 10:37:55.202209 1230576 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:37:55.207895 1230576 out.go:179] * Done! kubectl is now configured to use "newest-cni-515571" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.037027259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.040478045Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1cf077bb-8342-4844-bddd-429a1b472812 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.044177504Z" level=info msg="Ran pod sandbox 2f530a989050b5da55f1d870e8c4ea0619457c058a34dcf1526c2e88a9631d65 with infra container: kube-system/kindnet-6vtjh/POD" id=1cf077bb-8342-4844-bddd-429a1b472812 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.045789658Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3fce7bfc-ff59-4a23-ae29-41754d0f3045 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.047101167Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e61f5d02-65c4-4736-9717-7f90f3042b7b name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.053097535Z" level=info msg="Creating container: kube-system/kindnet-6vtjh/kindnet-cni" id=c552ef24-8c86-46ca-9396-62a555aa59ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.053206266Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.05738197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.057860348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.072856186Z" level=info msg="Created container 7c67729772a20e47e30832866885a98114dc8c054b52feaa5b7f740c4e16d2ac: kube-system/kindnet-6vtjh/kindnet-cni" id=c552ef24-8c86-46ca-9396-62a555aa59ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.076023288Z" level=info msg="Starting container: 7c67729772a20e47e30832866885a98114dc8c054b52feaa5b7f740c4e16d2ac" id=e6d17c2f-a345-46b8-8223-126f5b62f165 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.080265796Z" level=info msg="Started container" PID=1504 containerID=7c67729772a20e47e30832866885a98114dc8c054b52feaa5b7f740c4e16d2ac description=kube-system/kindnet-6vtjh/kindnet-cni id=e6d17c2f-a345-46b8-8223-126f5b62f165 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f530a989050b5da55f1d870e8c4ea0619457c058a34dcf1526c2e88a9631d65
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.88319666Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-cqlhl/POD" id=25565b7b-8fce-4759-9cc8-934cd446b150 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.883278988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.88697423Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=25565b7b-8fce-4759-9cc8-934cd446b150 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.892480332Z" level=info msg="Ran pod sandbox 4f23f8b22715cc5146307d162e1faa144161f94dc4b4dbcd2c7fa80db6550a72 with infra container: kube-system/kube-proxy-cqlhl/POD" id=25565b7b-8fce-4759-9cc8-934cd446b150 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.894427687Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e4ae581e-2518-4372-a2de-9e1ffe91045f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.896402234Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a42e1c47-4080-4ba6-b111-d317f7d316ca name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.901544063Z" level=info msg="Creating container: kube-system/kube-proxy-cqlhl/kube-proxy" id=a0554478-898d-4dc1-98bd-dd52c0c46644 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.901672863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.909640479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.910167233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.940883975Z" level=info msg="Created container e7c57b5805589d9f544b4816815ad05d29a21d53f621e956bddecffa61d19b9c: kube-system/kube-proxy-cqlhl/kube-proxy" id=a0554478-898d-4dc1-98bd-dd52c0c46644 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.941776256Z" level=info msg="Starting container: e7c57b5805589d9f544b4816815ad05d29a21d53f621e956bddecffa61d19b9c" id=f4e3f851-6c6e-4c7a-9973-d0a18eae2f75 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:37:55 newest-cni-515571 crio[838]: time="2025-11-08T10:37:55.95037829Z" level=info msg="Started container" PID=1592 containerID=e7c57b5805589d9f544b4816815ad05d29a21d53f621e956bddecffa61d19b9c description=kube-system/kube-proxy-cqlhl/kube-proxy id=f4e3f851-6c6e-4c7a-9973-d0a18eae2f75 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f23f8b22715cc5146307d162e1faa144161f94dc4b4dbcd2c7fa80db6550a72
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e7c57b5805589       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   Less than a second ago   Running             kube-proxy                0                   4f23f8b22715c       kube-proxy-cqlhl                            kube-system
	7c67729772a20       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago             Running             kindnet-cni               0                   2f530a989050b       kindnet-6vtjh                               kube-system
	a52f862863db1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago           Running             kube-scheduler            0                   e518f81f2f39a       kube-scheduler-newest-cni-515571            kube-system
	49d8c665fd277       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago           Running             kube-apiserver            0                   ff602681798bb       kube-apiserver-newest-cni-515571            kube-system
	ac10b4b25ec96       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago           Running             kube-controller-manager   0                   181f5e65d00ef       kube-controller-manager-newest-cni-515571   kube-system
	57d264753d2f8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago           Running             etcd                      0                   3276b196bdd61       etcd-newest-cni-515571                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-515571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-515571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=newest-cni-515571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_37_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:37:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-515571
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:37:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:37:49 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:37:49 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:37:49 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 10:37:49 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-515571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                da96ae8e-28b2-4384-8ee4-16fe0d13fbbb
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-515571                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7s
	  kube-system                 kindnet-6vtjh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-515571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-515571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-cqlhl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-515571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 0s    kube-proxy       
	  Normal   Starting                 7s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 7s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7s    kubelet          Node newest-cni-515571 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s    kubelet          Node newest-cni-515571 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s    kubelet          Node newest-cni-515571 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s    node-controller  Node newest-cni-515571 event: Registered Node newest-cni-515571 in Controller
	
	
	==> dmesg <==
	[Nov 8 10:16] overlayfs: idmapped layers are currently not supported
	[ +45.742765] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:36] overlayfs: idmapped layers are currently not supported
	[ +30.788294] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [57d264753d2f85a5abbea88eaa5c3dbcd1a47c6e19dedc986b93c56f580a794e] <==
	{"level":"warn","ts":"2025-11-08T10:37:44.278352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.310747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.348532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.379795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.424754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.440560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.472170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.509673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.536735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.581129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.609761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.622177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.691065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.722026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.773512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.779096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.824665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.844234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.874059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.903289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.944897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:44.976633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:45.012560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:45.017222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:37:45.131847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58196","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:56 up  9:20,  0 user,  load average: 3.63, 3.84, 3.16
	Linux newest-cni-515571 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7c67729772a20e47e30832866885a98114dc8c054b52feaa5b7f740c4e16d2ac] <==
	I1108 10:37:55.117009       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:37:55.117237       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:37:55.117372       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:37:55.117390       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:37:55.117404       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:37:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:37:55.419364       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:37:55.419403       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:37:55.419419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:37:55.419873       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [49d8c665fd277526de60c33912693ed358c60167e12371cf03a46e355bb1979e] <==
	I1108 10:37:46.239531       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 10:37:46.239560       1 cache.go:39] Caches are synced for autoregister controller
	I1108 10:37:46.246467       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:37:46.265023       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:37:46.265200       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 10:37:46.306172       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:37:46.310036       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:37:46.312249       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:37:46.933205       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 10:37:46.940374       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 10:37:46.940400       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:37:47.681901       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:37:47.733536       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:37:47.876684       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 10:37:47.886690       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1108 10:37:47.887947       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:37:47.892964       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:37:47.978803       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:37:48.921657       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:37:48.957069       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 10:37:48.977386       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 10:37:53.683625       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:37:53.688272       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:37:53.839094       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:37:53.985313       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [ac10b4b25ec96168ee01a3f2f763bc32cae0da70b77f55097890a7ffab5cca57] <==
	I1108 10:37:52.975544       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:37:52.975589       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:37:52.976060       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:37:52.977830       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:37:52.979422       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:37:52.979597       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:37:52.979964       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:37:52.980401       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:37:52.980464       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 10:37:52.990552       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 10:37:53.007860       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 10:37:53.007948       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 10:37:53.007980       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 10:37:53.008002       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 10:37:53.008009       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 10:37:53.017088       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-515571" podCIDRs=["10.42.0.0/24"]
	I1108 10:37:53.022032       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:37:53.024325       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 10:37:53.024351       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 10:37:53.025452       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:37:53.025504       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:37:53.027727       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:37:53.032293       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:37:53.032296       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 10:37:53.032507       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	
	
	==> kube-proxy [e7c57b5805589d9f544b4816815ad05d29a21d53f621e956bddecffa61d19b9c] <==
	I1108 10:37:56.004559       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:37:56.098147       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:37:56.198318       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:37:56.198435       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:37:56.198547       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:37:56.226195       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:37:56.226304       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:37:56.230369       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:37:56.232064       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:37:56.232276       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:37:56.233726       1 config.go:200] "Starting service config controller"
	I1108 10:37:56.233814       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:37:56.233861       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:37:56.233888       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:37:56.233935       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:37:56.233963       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:37:56.234604       1 config.go:309] "Starting node config controller"
	I1108 10:37:56.234655       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:37:56.234684       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:37:56.334334       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:37:56.334373       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:37:56.334409       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a52f862863db1572c6da41c4d81ed1116a14a8a4f98558db9cab7ab1268d5eb2] <==
	E1108 10:37:46.279018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:37:46.279071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:37:46.291589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 10:37:46.291677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 10:37:46.291589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 10:37:46.291743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:37:46.291786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 10:37:46.291832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:37:46.291946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:37:46.292053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 10:37:46.292162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 10:37:46.302725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 10:37:46.302868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:37:46.302958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:37:47.093298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 10:37:47.107075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 10:37:47.211930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 10:37:47.243224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 10:37:47.275427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 10:37:47.331936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 10:37:47.356494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 10:37:47.370670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 10:37:47.408426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 10:37:47.417943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1108 10:37:49.145414       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:37:50 newest-cni-515571 kubelet[1308]: I1108 10:37:50.043422    1308 apiserver.go:52] "Watching apiserver"
	Nov 08 10:37:50 newest-cni-515571 kubelet[1308]: I1108 10:37:50.088090    1308 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 10:37:50 newest-cni-515571 kubelet[1308]: I1108 10:37:50.315154    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-515571" podStartSLOduration=1.315136178 podStartE2EDuration="1.315136178s" podCreationTimestamp="2025-11-08 10:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:37:50.261923669 +0000 UTC m=+1.399211443" watchObservedRunningTime="2025-11-08 10:37:50.315136178 +0000 UTC m=+1.452423920"
	Nov 08 10:37:50 newest-cni-515571 kubelet[1308]: I1108 10:37:50.387973    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-515571" podStartSLOduration=1.387946752 podStartE2EDuration="1.387946752s" podCreationTimestamp="2025-11-08 10:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:37:50.316986371 +0000 UTC m=+1.454274146" watchObservedRunningTime="2025-11-08 10:37:50.387946752 +0000 UTC m=+1.525234502"
	Nov 08 10:37:50 newest-cni-515571 kubelet[1308]: I1108 10:37:50.516895    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-515571" podStartSLOduration=1.516859891 podStartE2EDuration="1.516859891s" podCreationTimestamp="2025-11-08 10:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:37:50.39537447 +0000 UTC m=+1.532662228" watchObservedRunningTime="2025-11-08 10:37:50.516859891 +0000 UTC m=+1.654147641"
	Nov 08 10:37:50 newest-cni-515571 kubelet[1308]: I1108 10:37:50.594204    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-515571" podStartSLOduration=1.59416428 podStartE2EDuration="1.59416428s" podCreationTimestamp="2025-11-08 10:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:37:50.521289002 +0000 UTC m=+1.658576761" watchObservedRunningTime="2025-11-08 10:37:50.59416428 +0000 UTC m=+1.731452030"
	Nov 08 10:37:53 newest-cni-515571 kubelet[1308]: I1108 10:37:53.079118    1308 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 10:37:53 newest-cni-515571 kubelet[1308]: I1108 10:37:53.080209    1308 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: E1108 10:37:54.100177    1308 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:newest-cni-515571\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-515571' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: E1108 10:37:54.100252    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-cqlhl\" is forbidden: User \"system:node:newest-cni-515571\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-515571' and this object" podUID="0385ed05-d22d-4bb0-b165-eeb7226e70fd" pod="kube-system/kube-proxy-cqlhl"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: E1108 10:37:54.100350    1308 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-515571\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-515571' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: I1108 10:37:54.150022    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0385ed05-d22d-4bb0-b165-eeb7226e70fd-kube-proxy\") pod \"kube-proxy-cqlhl\" (UID: \"0385ed05-d22d-4bb0-b165-eeb7226e70fd\") " pod="kube-system/kube-proxy-cqlhl"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: I1108 10:37:54.150165    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0385ed05-d22d-4bb0-b165-eeb7226e70fd-xtables-lock\") pod \"kube-proxy-cqlhl\" (UID: \"0385ed05-d22d-4bb0-b165-eeb7226e70fd\") " pod="kube-system/kube-proxy-cqlhl"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: I1108 10:37:54.150191    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0385ed05-d22d-4bb0-b165-eeb7226e70fd-lib-modules\") pod \"kube-proxy-cqlhl\" (UID: \"0385ed05-d22d-4bb0-b165-eeb7226e70fd\") " pod="kube-system/kube-proxy-cqlhl"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: I1108 10:37:54.150212    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k45bh\" (UniqueName: \"kubernetes.io/projected/0385ed05-d22d-4bb0-b165-eeb7226e70fd-kube-api-access-k45bh\") pod \"kube-proxy-cqlhl\" (UID: \"0385ed05-d22d-4bb0-b165-eeb7226e70fd\") " pod="kube-system/kube-proxy-cqlhl"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: I1108 10:37:54.251509    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69f8e634-a5cb-438a-a6ac-5762a43d39e5-xtables-lock\") pod \"kindnet-6vtjh\" (UID: \"69f8e634-a5cb-438a-a6ac-5762a43d39e5\") " pod="kube-system/kindnet-6vtjh"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: I1108 10:37:54.251560    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69f8e634-a5cb-438a-a6ac-5762a43d39e5-lib-modules\") pod \"kindnet-6vtjh\" (UID: \"69f8e634-a5cb-438a-a6ac-5762a43d39e5\") " pod="kube-system/kindnet-6vtjh"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: I1108 10:37:54.251579    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/69f8e634-a5cb-438a-a6ac-5762a43d39e5-cni-cfg\") pod \"kindnet-6vtjh\" (UID: \"69f8e634-a5cb-438a-a6ac-5762a43d39e5\") " pod="kube-system/kindnet-6vtjh"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: I1108 10:37:54.251612    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28jx7\" (UniqueName: \"kubernetes.io/projected/69f8e634-a5cb-438a-a6ac-5762a43d39e5-kube-api-access-28jx7\") pod \"kindnet-6vtjh\" (UID: \"69f8e634-a5cb-438a-a6ac-5762a43d39e5\") " pod="kube-system/kindnet-6vtjh"
	Nov 08 10:37:54 newest-cni-515571 kubelet[1308]: I1108 10:37:54.933722    1308 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:37:55 newest-cni-515571 kubelet[1308]: W1108 10:37:55.044620    1308 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/crio-2f530a989050b5da55f1d870e8c4ea0619457c058a34dcf1526c2e88a9631d65 WatchSource:0}: Error finding container 2f530a989050b5da55f1d870e8c4ea0619457c058a34dcf1526c2e88a9631d65: Status 404 returned error can't find the container with id 2f530a989050b5da55f1d870e8c4ea0619457c058a34dcf1526c2e88a9631d65
	Nov 08 10:37:55 newest-cni-515571 kubelet[1308]: E1108 10:37:55.255738    1308 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 08 10:37:55 newest-cni-515571 kubelet[1308]: E1108 10:37:55.256327    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0385ed05-d22d-4bb0-b165-eeb7226e70fd-kube-proxy podName:0385ed05-d22d-4bb0-b165-eeb7226e70fd nodeName:}" failed. No retries permitted until 2025-11-08 10:37:55.756299977 +0000 UTC m=+6.893587719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0385ed05-d22d-4bb0-b165-eeb7226e70fd-kube-proxy") pod "kube-proxy-cqlhl" (UID: "0385ed05-d22d-4bb0-b165-eeb7226e70fd") : failed to sync configmap cache: timed out waiting for the condition
	Nov 08 10:37:55 newest-cni-515571 kubelet[1308]: I1108 10:37:55.335184    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6vtjh" podStartSLOduration=1.335164034 podStartE2EDuration="1.335164034s" podCreationTimestamp="2025-11-08 10:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:37:55.301706056 +0000 UTC m=+6.438993839" watchObservedRunningTime="2025-11-08 10:37:55.335164034 +0000 UTC m=+6.472451776"
	Nov 08 10:37:56 newest-cni-515571 kubelet[1308]: I1108 10:37:56.293709    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cqlhl" podStartSLOduration=2.293688217 podStartE2EDuration="2.293688217s" podCreationTimestamp="2025-11-08 10:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 10:37:56.277806645 +0000 UTC m=+7.415094387" watchObservedRunningTime="2025-11-08 10:37:56.293688217 +0000 UTC m=+7.430975967"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-515571 -n newest-cni-515571
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-515571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-tzpcv storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-515571 describe pod coredns-66bc5c9577-tzpcv storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-515571 describe pod coredns-66bc5c9577-tzpcv storage-provisioner: exit status 1 (79.809458ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-tzpcv" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-515571 describe pod coredns-66bc5c9577-tzpcv storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (7.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-515571 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-515571 --alsologtostderr -v=1: exit status 80 (2.334446959s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-515571 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:38:19.012816 1238058 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:38:19.013053 1238058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:38:19.013082 1238058 out.go:374] Setting ErrFile to fd 2...
	I1108 10:38:19.013103 1238058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:38:19.013401 1238058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:38:19.013718 1238058 out.go:368] Setting JSON to false
	I1108 10:38:19.013788 1238058 mustload.go:66] Loading cluster: newest-cni-515571
	I1108 10:38:19.014230 1238058 config.go:182] Loaded profile config "newest-cni-515571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:19.014747 1238058 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:38:19.033165 1238058 host.go:66] Checking if "newest-cni-515571" exists ...
	I1108 10:38:19.033902 1238058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:38:19.133867 1238058 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-08 10:38:19.12173604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:38:19.134555 1238058 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-515571 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:38:19.137966 1238058 out.go:179] * Pausing node newest-cni-515571 ... 
	I1108 10:38:19.140851 1238058 host.go:66] Checking if "newest-cni-515571" exists ...
	I1108 10:38:19.141176 1238058 ssh_runner.go:195] Run: systemctl --version
	I1108 10:38:19.141221 1238058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:19.167780 1238058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:19.301214 1238058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:38:19.324844 1238058 pause.go:52] kubelet running: true
	I1108 10:38:19.324929 1238058 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:38:19.720021 1238058 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:38:19.720116 1238058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:38:19.864318 1238058 cri.go:89] found id: "93511cf560575ebe917dce5846ff27235243c676c68ce71935565137b991bee0"
	I1108 10:38:19.864344 1238058 cri.go:89] found id: "aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345"
	I1108 10:38:19.864349 1238058 cri.go:89] found id: "93d0c8e070cb668a489e9ad2a2665a4c28e5a124650ce0d95549a343c79037a0"
	I1108 10:38:19.864353 1238058 cri.go:89] found id: "38bc479dedc5ae4fd9d713123be920853a980f8e2e86f024661007578f58babe"
	I1108 10:38:19.864356 1238058 cri.go:89] found id: "02f8d0ac9dba3db69b485cb9b56006f12f108e27f5767ecbcca542963009eec6"
	I1108 10:38:19.864360 1238058 cri.go:89] found id: "8028c5744e9c2fa0cbfd055e941f992b8050ed81b1668d7cdfad5fcf592a4fea"
	I1108 10:38:19.864362 1238058 cri.go:89] found id: ""
	I1108 10:38:19.864409 1238058 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:38:19.880987 1238058 retry.go:31] will retry after 327.872328ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:38:19Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:38:20.209310 1238058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:38:20.233432 1238058 pause.go:52] kubelet running: false
	I1108 10:38:20.233572 1238058 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:38:20.440400 1238058 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:38:20.440557 1238058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:38:20.581407 1238058 cri.go:89] found id: "93511cf560575ebe917dce5846ff27235243c676c68ce71935565137b991bee0"
	I1108 10:38:20.581474 1238058 cri.go:89] found id: "aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345"
	I1108 10:38:20.581509 1238058 cri.go:89] found id: "93d0c8e070cb668a489e9ad2a2665a4c28e5a124650ce0d95549a343c79037a0"
	I1108 10:38:20.581534 1238058 cri.go:89] found id: "38bc479dedc5ae4fd9d713123be920853a980f8e2e86f024661007578f58babe"
	I1108 10:38:20.581552 1238058 cri.go:89] found id: "02f8d0ac9dba3db69b485cb9b56006f12f108e27f5767ecbcca542963009eec6"
	I1108 10:38:20.581589 1238058 cri.go:89] found id: "8028c5744e9c2fa0cbfd055e941f992b8050ed81b1668d7cdfad5fcf592a4fea"
	I1108 10:38:20.581612 1238058 cri.go:89] found id: ""
	I1108 10:38:20.581695 1238058 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:38:20.598218 1238058 retry.go:31] will retry after 287.975968ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:38:20Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:38:20.886747 1238058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:38:20.907937 1238058 pause.go:52] kubelet running: false
	I1108 10:38:20.908052 1238058 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:38:21.132004 1238058 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:38:21.132143 1238058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:38:21.223005 1238058 cri.go:89] found id: "93511cf560575ebe917dce5846ff27235243c676c68ce71935565137b991bee0"
	I1108 10:38:21.223076 1238058 cri.go:89] found id: "aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345"
	I1108 10:38:21.223098 1238058 cri.go:89] found id: "93d0c8e070cb668a489e9ad2a2665a4c28e5a124650ce0d95549a343c79037a0"
	I1108 10:38:21.223119 1238058 cri.go:89] found id: "38bc479dedc5ae4fd9d713123be920853a980f8e2e86f024661007578f58babe"
	I1108 10:38:21.223152 1238058 cri.go:89] found id: "02f8d0ac9dba3db69b485cb9b56006f12f108e27f5767ecbcca542963009eec6"
	I1108 10:38:21.223174 1238058 cri.go:89] found id: "8028c5744e9c2fa0cbfd055e941f992b8050ed81b1668d7cdfad5fcf592a4fea"
	I1108 10:38:21.223193 1238058 cri.go:89] found id: ""
	I1108 10:38:21.223287 1238058 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:38:21.238318 1238058 out.go:203] 
	W1108 10:38:21.241217 1238058 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:38:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:38:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:38:21.241241 1238058 out.go:285] * 
	* 
	W1108 10:38:21.256943 1238058 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:38:21.259743 1238058 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-515571 --alsologtostderr -v=1 failed: exit status 80
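The stderr block above captures the sequence the pause command walks through before giving up: it confirms kubelet is running, stops it, enumerates CRI containers for the kube-system, kubernetes-dashboard and istio-operator namespaces, then runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this CRI-O node, so pause aborts with GUEST_PAUSE. A minimal reproduction sketch of that sequence, assuming a shell on the node opened with `minikube ssh -p newest-cni-515571` (the commands mirror the log above; this is an illustration, not minikube's implementation):

	# Check whether kubelet is still active (pause disables it on its first iteration).
	sudo systemctl is-active --quiet kubelet && echo "kubelet: active" || echo "kubelet: inactive"
	# List containers in one of the namespaces the pause path inspects (same label filter as the log).
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The step that fails in the log: runc's default root /run/runc is missing, so this exits non-zero.
	sudo runc list -f json || echo "runc list failed: /run/runc not found"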
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-515571
helpers_test.go:243: (dbg) docker inspect newest-cni-515571:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d",
	        "Created": "2025-11-08T10:37:25.283274548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1234885,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:37:59.480132826Z",
	            "FinishedAt": "2025-11-08T10:37:58.532191531Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/hosts",
	        "LogPath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d-json.log",
	        "Name": "/newest-cni-515571",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-515571:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-515571",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d",
	                "LowerDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223/merged",
	                "UpperDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223/diff",
	                "WorkDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-515571",
	                "Source": "/var/lib/docker/volumes/newest-cni-515571/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-515571",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-515571",
	                "name.minikube.sigs.k8s.io": "newest-cni-515571",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebdeed8d0947e803b687dc81d80784544dc38bc1cb0503c1592f4d39912e5df2",
	            "SandboxKey": "/var/run/docker/netns/ebdeed8d0947",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34547"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34548"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34551"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34549"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34550"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-515571": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:19:de:c7:07:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e044b4554ec93678a97772c9b706896f0ba13332a99b10f9f482de6020b370fa",
	                    "EndpointID": "699767b60e0d101f6eb70897e8c0ede638428cc09776a63fab1a0f38f2e901c6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-515571",
	                        "f94bf5ad2ae9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
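The 22/tcp mapping in the inspect output above (HostPort 34547) is the value the pause run used to open its SSH session (see the sshutil "new ssh client" line in the earlier stderr). A quick way to read the same value from the host, assuming the same profile name, is the inspect template already shown in that stderr:

	# Prints the host port mapped to the node's SSH port (34547 in this run).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-515571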
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-515571 -n newest-cni-515571
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-515571 -n newest-cni-515571: exit status 2 (547.336653ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-515571 logs -n 25
E1108 10:38:22.712785 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-515571 logs -n 25: (1.496907416s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-790346 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-790346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ image   │ default-k8s-diff-port-236075 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ pause   │ -p default-k8s-diff-port-236075 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-553553                                                                                                                                                                                                               │ disable-driver-mounts-553553 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:37 UTC │
	│ image   │ embed-certs-790346 image list --format=json                                                                                                                                                                                                   │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-790346 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-291044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ stop    │ -p no-preload-291044 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p newest-cni-515571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ stop    │ -p newest-cni-515571 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-515571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:38 UTC │
	│ addons  │ enable dashboard -p no-preload-291044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │                     │
	│ image   │ newest-cni-515571 image list --format=json                                                                                                                                                                                                    │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-515571 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
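
The last audit row is the step under test; it has no end time recorded, consistent with the pause itself failing. To reproduce it by hand against an existing newest-cni-515571 profile, the same invocation from the table can be rerun:

    out/minikube-linux-arm64 pause -p newest-cni-515571 --alsologtostderr -v=1
    # and, if it succeeds, undone again:
    out/minikube-linux-arm64 unpause -p newest-cni-515571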
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:38:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:38:03.479591 1235505 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:38:03.480146 1235505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:38:03.480181 1235505 out.go:374] Setting ErrFile to fd 2...
	I1108 10:38:03.480203 1235505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:38:03.480508 1235505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:38:03.480906 1235505 out.go:368] Setting JSON to false
	I1108 10:38:03.481802 1235505 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33629,"bootTime":1762564655,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:38:03.481899 1235505 start.go:143] virtualization:  
	I1108 10:38:03.486959 1235505 out.go:179] * [no-preload-291044] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:38:03.490063 1235505 notify.go:221] Checking for updates...
	I1108 10:38:03.490981 1235505 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:38:03.493800 1235505 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:38:03.496614 1235505 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:03.499540 1235505 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:38:03.502539 1235505 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:38:03.505348 1235505 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:38:03.508688 1235505 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:03.509246 1235505 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:38:03.541902 1235505 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:38:03.542015 1235505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:38:03.610704 1235505 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:38:03.600157627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:38:03.610805 1235505 docker.go:319] overlay module found
	I1108 10:38:03.614031 1235505 out.go:179] * Using the docker driver based on existing profile
	I1108 10:38:03.616858 1235505 start.go:309] selected driver: docker
	I1108 10:38:03.616877 1235505 start.go:930] validating driver "docker" against &{Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:38:03.616982 1235505 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:38:03.617660 1235505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:38:03.682872 1235505 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:38:03.673713347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:38:03.683210 1235505 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:38:03.683244 1235505 cni.go:84] Creating CNI manager for ""
	I1108 10:38:03.683299 1235505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:38:03.683343 1235505 start.go:353] cluster config:
	{Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:38:03.686538 1235505 out.go:179] * Starting "no-preload-291044" primary control-plane node in "no-preload-291044" cluster
	I1108 10:38:03.689364 1235505 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:38:03.692296 1235505 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:38:03.695162 1235505 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:03.695306 1235505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/config.json ...
	I1108 10:38:03.695658 1235505 cache.go:107] acquiring lock: {Name:mk8513c6159258582048bf022eb3626495f0ef99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.695747 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 10:38:03.695762 1235505 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.788µs
	I1108 10:38:03.695770 1235505 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 10:38:03.695785 1235505 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:38:03.695983 1235505 cache.go:107] acquiring lock: {Name:mkc673276c059e1336edcaed46b38c8432a558c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696048 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1108 10:38:03.696056 1235505 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 78.414µs
	I1108 10:38:03.696063 1235505 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1108 10:38:03.696083 1235505 cache.go:107] acquiring lock: {Name:mkfbe116f289c09e7f023243a3e334812266f562 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696120 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1108 10:38:03.696125 1235505 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 52.479µs
	I1108 10:38:03.696131 1235505 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1108 10:38:03.696141 1235505 cache.go:107] acquiring lock: {Name:mkab778ec210a01a148a027551ae4dd6f48ac681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696168 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1108 10:38:03.696173 1235505 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 33.706µs
	I1108 10:38:03.696179 1235505 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1108 10:38:03.696187 1235505 cache.go:107] acquiring lock: {Name:mk7e5c4997cde36ed0e08a0661a5a5dfada4e032 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696212 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1108 10:38:03.696217 1235505 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.769µs
	I1108 10:38:03.696223 1235505 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1108 10:38:03.696233 1235505 cache.go:107] acquiring lock: {Name:mkde9e8ad2f329aff2c9e641a9eec6a25ba01057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696257 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1108 10:38:03.696262 1235505 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 29.743µs
	I1108 10:38:03.696267 1235505 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1108 10:38:03.696275 1235505 cache.go:107] acquiring lock: {Name:mk0c87ccf4c259c637cc851ae066ca5ca4e4afa3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696300 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1108 10:38:03.696306 1235505 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.195µs
	I1108 10:38:03.696311 1235505 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1108 10:38:03.696320 1235505 cache.go:107] acquiring lock: {Name:mkfd6f0a7827507a867318ffa03b1f88753d73c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696344 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1108 10:38:03.696432 1235505 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 110.199µs
	I1108 10:38:03.696467 1235505 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1108 10:38:03.696475 1235505 cache.go:87] Successfully saved all images to host disk.
	I1108 10:38:03.724692 1235505 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:38:03.724713 1235505 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:38:03.724726 1235505 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:38:03.724748 1235505 start.go:360] acquireMachinesLock for no-preload-291044: {Name:mkddf61b3e3a9309635e3814dcc2626dcf0ac06a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.724802 1235505 start.go:364] duration metric: took 39.794µs to acquireMachinesLock for "no-preload-291044"
	I1108 10:38:03.724827 1235505 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:38:03.724833 1235505 fix.go:54] fixHost starting: 
	I1108 10:38:03.725090 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:03.748518 1235505 fix.go:112] recreateIfNeeded on no-preload-291044: state=Stopped err=<nil>
	W1108 10:38:03.748550 1235505 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 10:37:59.450095 1234759 out.go:252] * Restarting existing docker container for "newest-cni-515571" ...
	I1108 10:37:59.450223 1234759 cli_runner.go:164] Run: docker start newest-cni-515571
	I1108 10:37:59.691563 1234759 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:37:59.715270 1234759 kic.go:430] container "newest-cni-515571" state is running.
	I1108 10:37:59.715681 1234759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:37:59.737607 1234759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/config.json ...
	I1108 10:37:59.737826 1234759 machine.go:94] provisionDockerMachine start ...
	I1108 10:37:59.737890 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:59.765889 1234759 main.go:143] libmachine: Using SSH client type: native
	I1108 10:37:59.766211 1234759 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1108 10:37:59.766220 1234759 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:37:59.767223 1234759 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:38:02.940329 1234759 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-515571
	
	I1108 10:38:02.940355 1234759 ubuntu.go:182] provisioning hostname "newest-cni-515571"
	I1108 10:38:02.940475 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:02.963657 1234759 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:02.964006 1234759 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1108 10:38:02.964019 1234759 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-515571 && echo "newest-cni-515571" | sudo tee /etc/hostname
	I1108 10:38:03.185627 1234759 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-515571
	
	I1108 10:38:03.185729 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:03.217977 1234759 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:03.218304 1234759 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1108 10:38:03.218323 1234759 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-515571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-515571/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-515571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:38:03.384925 1234759 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:38:03.384951 1234759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:38:03.384979 1234759 ubuntu.go:190] setting up certificates
	I1108 10:38:03.384995 1234759 provision.go:84] configureAuth start
	I1108 10:38:03.385072 1234759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:38:03.410074 1234759 provision.go:143] copyHostCerts
	I1108 10:38:03.410142 1234759 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:38:03.410168 1234759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:38:03.410246 1234759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:38:03.410345 1234759 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:38:03.410354 1234759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:38:03.410381 1234759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:38:03.410483 1234759 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:38:03.410494 1234759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:38:03.410525 1234759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:38:03.410580 1234759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.newest-cni-515571 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-515571]
	I1108 10:38:03.559473 1234759 provision.go:177] copyRemoteCerts
	I1108 10:38:03.559570 1234759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:38:03.559639 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:03.593848 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:03.708761 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:38:03.733719 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:38:03.750660 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:38:03.783507 1234759 provision.go:87] duration metric: took 398.490052ms to configureAuth
	I1108 10:38:03.783530 1234759 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:38:03.783725 1234759 config.go:182] Loaded profile config "newest-cni-515571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:03.783838 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:03.809496 1234759 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:03.809803 1234759 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1108 10:38:03.809817 1234759 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:38:04.187453 1234759 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:38:04.187482 1234759 machine.go:97] duration metric: took 4.449647113s to provisionDockerMachine
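
The tee command above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube and restarts crio, which is how the service CIDR gets treated as an insecure registry range inside the node. A spot-check of the result, assuming the profile is up again (the systemctl step presumes the kicbase crio unit sources that file, which this log does not show):

    out/minikube-linux-arm64 ssh -p newest-cni-515571 -- cat /etc/sysconfig/crio.minikube
    out/minikube-linux-arm64 ssh -p newest-cni-515571 -- sudo systemctl cat crio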
	I1108 10:38:04.187493 1234759 start.go:293] postStartSetup for "newest-cni-515571" (driver="docker")
	I1108 10:38:04.187504 1234759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:38:04.187577 1234759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:38:04.187629 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:04.212660 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:04.326544 1234759 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:38:04.330373 1234759 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:38:04.330400 1234759 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:38:04.330412 1234759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:38:04.330464 1234759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:38:04.330556 1234759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:38:04.330670 1234759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:38:04.340135 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:04.365878 1234759 start.go:296] duration metric: took 178.368566ms for postStartSetup
	I1108 10:38:04.365981 1234759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:38:04.366030 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:04.391616 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:04.499507 1234759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:38:04.508480 1234759 fix.go:56] duration metric: took 5.07691335s for fixHost
	I1108 10:38:04.508509 1234759 start.go:83] releasing machines lock for "newest-cni-515571", held for 5.076977504s
	I1108 10:38:04.508578 1234759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:38:04.538134 1234759 ssh_runner.go:195] Run: cat /version.json
	I1108 10:38:04.538202 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:04.539583 1234759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:38:04.539649 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:04.575008 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:04.576006 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:04.676229 1234759 ssh_runner.go:195] Run: systemctl --version
	I1108 10:38:04.767884 1234759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:38:04.804416 1234759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:38:04.809653 1234759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:38:04.809727 1234759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:38:04.817750 1234759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:38:04.817777 1234759 start.go:496] detecting cgroup driver to use...
	I1108 10:38:04.817830 1234759 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:38:04.817884 1234759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:38:04.833091 1234759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:38:04.846416 1234759 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:38:04.846502 1234759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:38:04.861634 1234759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:38:04.874846 1234759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:38:04.998578 1234759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:38:05.135300 1234759 docker.go:234] disabling docker service ...
	I1108 10:38:05.135437 1234759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:38:05.152326 1234759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:38:05.166183 1234759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:38:05.278069 1234759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:38:05.400328 1234759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:38:05.415907 1234759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:38:05.431051 1234759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:38:05.431140 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.440240 1234759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:38:05.440339 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.449494 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.458714 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.467275 1234759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:38:05.476936 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.492419 1234759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.503097 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.521816 1234759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:38:05.531770 1234759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:38:05.539998 1234759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:05.700180 1234759 ssh_runner.go:195] Run: sudo systemctl restart crio
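
The sed calls above all edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) before the daemon-reload and crio restart. Once the node is reachable, the effective values can be checked with something like:

    out/minikube-linux-arm64 ssh -p newest-cni-515571 -- \
      sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf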
	I1108 10:38:05.839628 1234759 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:38:05.839762 1234759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:38:05.844389 1234759 start.go:564] Will wait 60s for crictl version
	I1108 10:38:05.844504 1234759 ssh_runner.go:195] Run: which crictl
	I1108 10:38:05.848236 1234759 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:38:05.875294 1234759 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:38:05.875471 1234759 ssh_runner.go:195] Run: crio --version
	I1108 10:38:05.903310 1234759 ssh_runner.go:195] Run: crio --version
	I1108 10:38:05.935121 1234759 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:38:05.938080 1234759 cli_runner.go:164] Run: docker network inspect newest-cni-515571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:38:05.954105 1234759 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:38:05.958107 1234759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:38:05.971030 1234759 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 10:38:05.974017 1234759 kubeadm.go:884] updating cluster {Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:38:05.974157 1234759 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:05.974231 1234759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:38:06.011184 1234759 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:38:06.011213 1234759 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:38:06.011284 1234759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:38:06.043441 1234759 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:38:06.043466 1234759 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:38:06.043474 1234759 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:38:06.043570 1234759 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-515571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
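
This is the kubelet drop-in minikube generates: the empty ExecStart= clears the unit's packaged command before the second ExecStart points it at the versioned binary under /var/lib/minikube/binaries, with the node IP and hostname override baked in. Per the scp lines a few steps further down it is installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, so on a running node the merged unit can be viewed with:

    out/minikube-linux-arm64 ssh -p newest-cni-515571 -- sudo systemctl cat kubelet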
	I1108 10:38:06.043652 1234759 ssh_runner.go:195] Run: crio config
	I1108 10:38:06.125687 1234759 cni.go:84] Creating CNI manager for ""
	I1108 10:38:06.125714 1234759 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:38:06.125734 1234759 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 10:38:06.125758 1234759 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-515571 NodeName:newest-cni-515571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:38:06.125897 1234759 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-515571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:38:06.125970 1234759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:38:06.135272 1234759 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:38:06.135366 1234759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:38:06.143782 1234759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 10:38:06.156729 1234759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:38:06.170481 1234759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
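
The rendered kubeadm config (the YAML printed above) is staged on the node as /var/tmp/minikube/kubeadm.yaml.new rather than applied immediately. To inspect or sanity-check the staged file by hand (the kubeadm path is assumed to sit next to the kubelet binary found above, and the validate subcommand only exists in newer kubeadm releases):

    out/minikube-linux-arm64 ssh -p newest-cni-515571 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    # optional validation inside the node:
    out/minikube-linux-arm64 ssh -p newest-cni-515571 -- \
      sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new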
	I1108 10:38:06.183713 1234759 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:38:06.187405 1234759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
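The bash -c one-liner above is the usual rewrite-and-replace pattern for /etc/hosts: filter out any stale control-plane.minikube.internal entry, append the current mapping, then install the temp file with sudo. The same pattern spread out for readability (a sketch, equivalent to the logged command):

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts    # drop any existing entry
      echo $'192.168.76.2\tcontrol-plane.minikube.internal'       # append the fresh mapping
    } > /tmp/h.$$                                                 # temp file keyed by shell PID
    sudo cp /tmp/h.$$ /etc/hosts                                  # install the rewritten file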
	I1108 10:38:06.197241 1234759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:06.326024 1234759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:38:06.342698 1234759 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571 for IP: 192.168.76.2
	I1108 10:38:06.342766 1234759 certs.go:195] generating shared ca certs ...
	I1108 10:38:06.342798 1234759 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:06.342975 1234759 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:38:06.343059 1234759 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:38:06.343094 1234759 certs.go:257] generating profile certs ...
	I1108 10:38:06.343236 1234759 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.key
	I1108 10:38:06.343347 1234759 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key.0dbe4724
	I1108 10:38:06.343429 1234759 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key
	I1108 10:38:06.343595 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:38:06.343670 1234759 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:38:06.343696 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:38:06.343759 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:38:06.343816 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:38:06.343881 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:38:06.343945 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:06.344766 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:38:06.366263 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:38:06.386219 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:38:06.407337 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:38:06.427959 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 10:38:06.464755 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:38:06.485545 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:38:06.508576 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:38:06.531913 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:38:06.550951 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:38:06.569249 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:38:06.589084 1234759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:38:06.603103 1234759 ssh_runner.go:195] Run: openssl version
	I1108 10:38:06.609766 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:38:06.618968 1234759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:38:06.623128 1234759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:38:06.623190 1234759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:38:06.664747 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:38:06.672888 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:38:06.681238 1234759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:38:06.685102 1234759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:38:06.685164 1234759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:38:06.728077 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:38:06.735981 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:38:06.744071 1234759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:06.747486 1234759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:06.747585 1234759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:06.788799 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
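The openssl x509 -hash calls above print each certificate's subject hash; the ln -fs commands then create the /etc/ssl/certs/<hash>.0 symlink that OpenSSL's default CA lookup expects. A minimal sketch of the same convention for the minikube CA:

    # Subject hash of the minikube CA (b5213941 per the symlink created above)
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"    # should point back at minikubeCA.pem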
	I1108 10:38:06.796804 1234759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:38:06.800379 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:38:06.841720 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:38:06.882874 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:38:06.923707 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:38:06.972864 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:38:07.034706 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
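Each openssl x509 -checkend 86400 call above exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit would flag a cert that is about to expire. A minimal sketch of the same check, using the apiserver cert as an example:

    # 0 = valid for at least another 24h; 1 = expires (or already expired) within that window
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver cert is good for at least 24h"
    else
        echo "apiserver cert expires within 24h"
    fi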
	I1108 10:38:07.128179 1234759 kubeadm.go:401] StartCluster: {Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:38:07.128288 1234759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:38:07.128360 1234759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:38:07.195611 1234759 cri.go:89] found id: "93d0c8e070cb668a489e9ad2a2665a4c28e5a124650ce0d95549a343c79037a0"
	I1108 10:38:07.195635 1234759 cri.go:89] found id: "38bc479dedc5ae4fd9d713123be920853a980f8e2e86f024661007578f58babe"
	I1108 10:38:07.195640 1234759 cri.go:89] found id: "02f8d0ac9dba3db69b485cb9b56006f12f108e27f5767ecbcca542963009eec6"
	I1108 10:38:07.195644 1234759 cri.go:89] found id: "8028c5744e9c2fa0cbfd055e941f992b8050ed81b1668d7cdfad5fcf592a4fea"
	I1108 10:38:07.195647 1234759 cri.go:89] found id: ""
	I1108 10:38:07.195702 1234759 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:38:07.218490 1234759 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:38:07Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:38:07.218565 1234759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:38:07.239766 1234759 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:38:07.239783 1234759 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:38:07.239833 1234759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:38:07.255042 1234759 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:38:07.255433 1234759 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-515571" does not appear in /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:07.255540 1234759 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-1027379/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-515571" cluster setting kubeconfig missing "newest-cni-515571" context setting]
	I1108 10:38:07.255817 1234759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
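After the repair, the newly written cluster and context entries can be confirmed from the host with plain kubectl (a minimal sketch; assumes KUBECONFIG points at the file updated above):

    export KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
    kubectl config get-contexts
    # Print the API server endpoint recorded for this cluster
    kubectl config view -o jsonpath='{.clusters[?(@.name=="newest-cni-515571")].cluster.server}'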
	I1108 10:38:07.257372 1234759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:38:07.270863 1234759 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 10:38:07.270897 1234759 kubeadm.go:602] duration metric: took 31.108288ms to restartPrimaryControlPlane
	I1108 10:38:07.270906 1234759 kubeadm.go:403] duration metric: took 142.736548ms to StartCluster
	I1108 10:38:07.270920 1234759 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:07.270977 1234759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:07.277162 1234759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:07.277415 1234759 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:38:07.277785 1234759 config.go:182] Loaded profile config "newest-cni-515571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:07.277846 1234759 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:38:07.277979 1234759 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-515571"
	I1108 10:38:07.277998 1234759 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-515571"
	W1108 10:38:07.278013 1234759 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:38:07.278036 1234759 host.go:66] Checking if "newest-cni-515571" exists ...
	I1108 10:38:07.281120 1234759 addons.go:70] Setting dashboard=true in profile "newest-cni-515571"
	I1108 10:38:07.281147 1234759 addons.go:239] Setting addon dashboard=true in "newest-cni-515571"
	W1108 10:38:07.281155 1234759 addons.go:248] addon dashboard should already be in state true
	I1108 10:38:07.281184 1234759 host.go:66] Checking if "newest-cni-515571" exists ...
	I1108 10:38:07.281620 1234759 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:38:07.281805 1234759 addons.go:70] Setting default-storageclass=true in profile "newest-cni-515571"
	I1108 10:38:07.281833 1234759 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-515571"
	I1108 10:38:07.282123 1234759 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:38:07.283308 1234759 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:38:07.283803 1234759 out.go:179] * Verifying Kubernetes components...
	I1108 10:38:07.287497 1234759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:07.335251 1234759 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:38:07.338418 1234759 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:38:07.341492 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:38:07.341514 1234759 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:38:07.341588 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:07.341962 1234759 addons.go:239] Setting addon default-storageclass=true in "newest-cni-515571"
	W1108 10:38:07.341973 1234759 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:38:07.341997 1234759 host.go:66] Checking if "newest-cni-515571" exists ...
	I1108 10:38:07.342410 1234759 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:38:07.366431 1234759 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:38:03.753237 1235505 out.go:252] * Restarting existing docker container for "no-preload-291044" ...
	I1108 10:38:03.753329 1235505 cli_runner.go:164] Run: docker start no-preload-291044
	I1108 10:38:04.057467 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:04.087682 1235505 kic.go:430] container "no-preload-291044" state is running.
	I1108 10:38:04.088083 1235505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:38:04.114833 1235505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/config.json ...
	I1108 10:38:04.115063 1235505 machine.go:94] provisionDockerMachine start ...
	I1108 10:38:04.115137 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:04.142293 1235505 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:04.142603 1235505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34552 <nil> <nil>}
	I1108 10:38:04.142612 1235505 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:38:04.143286 1235505 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:38:07.342300 1235505 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-291044
	
	I1108 10:38:07.342316 1235505 ubuntu.go:182] provisioning hostname "no-preload-291044"
	I1108 10:38:07.342360 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:07.403464 1235505 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:07.403770 1235505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34552 <nil> <nil>}
	I1108 10:38:07.403781 1235505 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-291044 && echo "no-preload-291044" | sudo tee /etc/hostname
	I1108 10:38:07.641017 1235505 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-291044
	
	I1108 10:38:07.641108 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:07.685648 1235505 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:07.685949 1235505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34552 <nil> <nil>}
	I1108 10:38:07.685974 1235505 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-291044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-291044/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-291044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:38:07.870020 1235505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:38:07.870055 1235505 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:38:07.870085 1235505 ubuntu.go:190] setting up certificates
	I1108 10:38:07.870103 1235505 provision.go:84] configureAuth start
	I1108 10:38:07.870166 1235505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:38:07.895007 1235505 provision.go:143] copyHostCerts
	I1108 10:38:07.895076 1235505 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:38:07.895097 1235505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:38:07.895183 1235505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:38:07.895294 1235505 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:38:07.895306 1235505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:38:07.895334 1235505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:38:07.895397 1235505 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:38:07.895407 1235505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:38:07.895435 1235505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:38:07.895498 1235505 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.no-preload-291044 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-291044]
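The server cert is generated with the SAN list shown above (localhost, minikube, no-preload-291044, 127.0.0.1, 192.168.85.2). A minimal sketch for confirming the SANs actually present in the signed cert:

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'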
	I1108 10:38:08.248829 1235505 provision.go:177] copyRemoteCerts
	I1108 10:38:08.248947 1235505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:38:08.249019 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:08.279121 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:08.394578 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:38:08.433663 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:38:08.457735 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:38:07.372584 1234759 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:38:07.372610 1234759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:38:07.372682 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:07.437177 1234759 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:38:07.437199 1234759 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:38:07.437260 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:07.437843 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:07.464455 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:07.490160 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:07.753141 1234759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:38:07.795000 1234759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:38:07.829248 1234759 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:38:07.829321 1234759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:38:07.837723 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:38:07.837747 1234759 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:38:07.892998 1234759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:38:07.961494 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:38:07.961515 1234759 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:38:08.056606 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:38:08.056628 1234759 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:38:08.082389 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:38:08.082409 1234759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:38:08.154561 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:38:08.154591 1234759 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:38:08.265932 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:38:08.265961 1234759 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:38:08.327953 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:38:08.327973 1234759 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:38:08.354072 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:38:08.354095 1234759 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:38:08.374911 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:38:08.374934 1234759 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:38:08.401155 1234759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
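Once the dashboard manifests are applied, the rollout can be checked in the namespace created by dashboard-ns.yaml (a sketch; the kubernetes-dashboard namespace and deployment names follow the upstream convention and are assumptions here):

    kubectl -n kubernetes-dashboard get deploy,pods
    kubectl -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard --timeout=120s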
	I1108 10:38:08.489848 1235505 provision.go:87] duration metric: took 619.71888ms to configureAuth
	I1108 10:38:08.489877 1235505 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:38:08.490081 1235505 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:08.490199 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:08.524065 1235505 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:08.524369 1235505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34552 <nil> <nil>}
	I1108 10:38:08.524391 1235505 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:38:08.942666 1235505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:38:08.942736 1235505 machine.go:97] duration metric: took 4.827663169s to provisionDockerMachine
	I1108 10:38:08.942762 1235505 start.go:293] postStartSetup for "no-preload-291044" (driver="docker")
	I1108 10:38:08.942787 1235505 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:38:08.942897 1235505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:38:08.942973 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:08.975053 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:09.102411 1235505 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:38:09.109110 1235505 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:38:09.109135 1235505 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:38:09.109146 1235505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:38:09.109213 1235505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:38:09.109288 1235505 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:38:09.109389 1235505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:38:09.120992 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:09.150179 1235505 start.go:296] duration metric: took 207.389117ms for postStartSetup
	I1108 10:38:09.150309 1235505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:38:09.150379 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:09.178158 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:09.328172 1235505 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:38:09.337192 1235505 fix.go:56] duration metric: took 5.612351222s for fixHost
	I1108 10:38:09.337219 1235505 start.go:83] releasing machines lock for "no-preload-291044", held for 5.612402945s
	I1108 10:38:09.337291 1235505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:38:09.372792 1235505 ssh_runner.go:195] Run: cat /version.json
	I1108 10:38:09.372847 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:09.373098 1235505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:38:09.373155 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:09.420131 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:09.422826 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:09.676922 1235505 ssh_runner.go:195] Run: systemctl --version
	I1108 10:38:09.684076 1235505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:38:09.759160 1235505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:38:09.764696 1235505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:38:09.764823 1235505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:38:09.780927 1235505 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:38:09.781001 1235505 start.go:496] detecting cgroup driver to use...
	I1108 10:38:09.781046 1235505 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:38:09.781133 1235505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:38:09.804654 1235505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:38:09.822639 1235505 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:38:09.822748 1235505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:38:09.842720 1235505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:38:09.862106 1235505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:38:10.060756 1235505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:38:10.258324 1235505 docker.go:234] disabling docker service ...
	I1108 10:38:10.258398 1235505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:38:10.289695 1235505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:38:10.310524 1235505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:38:10.526912 1235505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:38:10.745187 1235505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:38:10.765740 1235505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:38:10.782315 1235505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:38:10.782432 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.797797 1235505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:38:10.797946 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.813701 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.828991 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.845923 1235505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:38:10.861767 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.877322 1235505 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.886622 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.901987 1235505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:38:10.914049 1235505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:38:10.921921 1235505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:11.142192 1235505 ssh_runner.go:195] Run: sudo systemctl restart crio
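Taken together, the sed edits above adjust the pause image, cgroup manager, conmon cgroup, and default sysctls in the 02-crio.conf drop-in before CRI-O is restarted. A minimal sketch for confirming the result on the node:

    # Expected after the edits (order may vary):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0"   (inside default_sysctls)
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf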
	I1108 10:38:11.390191 1235505 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:38:11.390331 1235505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:38:11.394481 1235505 start.go:564] Will wait 60s for crictl version
	I1108 10:38:11.394626 1235505 ssh_runner.go:195] Run: which crictl
	I1108 10:38:11.398920 1235505 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:38:11.439213 1235505 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:38:11.439387 1235505 ssh_runner.go:195] Run: crio --version
	I1108 10:38:11.483016 1235505 ssh_runner.go:195] Run: crio --version
	I1108 10:38:11.541753 1235505 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:38:11.544664 1235505 cli_runner.go:164] Run: docker network inspect no-preload-291044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:38:11.570953 1235505 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:38:11.575198 1235505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:38:11.587740 1235505 kubeadm.go:884] updating cluster {Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:38:11.587845 1235505 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:11.587885 1235505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:38:11.656851 1235505 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:38:11.656931 1235505 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:38:11.656954 1235505 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 10:38:11.657098 1235505 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-291044 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
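The kubelet unit snippet above relies on the standard systemd drop-in trick: the empty ExecStart= clears any previously defined command so the minikube-specific invocation that follows becomes the only one. A minimal sketch for inspecting the merged unit on the node:

    systemctl cat kubelet                              # base unit plus every drop-in, in order
    systemctl show kubelet -p ExecStart --no-pager     # the effective command line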
	I1108 10:38:11.657220 1235505 ssh_runner.go:195] Run: crio config
	I1108 10:38:11.785316 1235505 cni.go:84] Creating CNI manager for ""
	I1108 10:38:11.785384 1235505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:38:11.785420 1235505 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:38:11.785476 1235505 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-291044 NodeName:no-preload-291044 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:38:11.785648 1235505 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-291044"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:38:11.785756 1235505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:38:11.796961 1235505 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:38:11.797076 1235505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:38:11.806445 1235505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 10:38:11.826669 1235505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:38:11.844320 1235505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1108 10:38:11.876878 1235505 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:38:11.880554 1235505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:38:11.893106 1235505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:12.088690 1235505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:38:12.106356 1235505 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044 for IP: 192.168.85.2
	I1108 10:38:12.106379 1235505 certs.go:195] generating shared ca certs ...
	I1108 10:38:12.106394 1235505 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:12.106536 1235505 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:38:12.106585 1235505 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:38:12.106599 1235505 certs.go:257] generating profile certs ...
	I1108 10:38:12.106681 1235505 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.key
	I1108 10:38:12.106745 1235505 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key.e7c39ab7
	I1108 10:38:12.106785 1235505 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key
	I1108 10:38:12.106887 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:38:12.106919 1235505 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:38:12.106931 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:38:12.106958 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:38:12.106982 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:38:12.107013 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:38:12.107059 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:12.112564 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:38:12.177235 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:38:12.214067 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:38:12.244879 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:38:12.285111 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 10:38:12.325139 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:38:12.399632 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:38:12.457923 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:38:12.495066 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:38:12.543949 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:38:12.571475 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:38:12.595340 1235505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:38:12.610475 1235505 ssh_runner.go:195] Run: openssl version
	I1108 10:38:12.618186 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:38:12.627430 1235505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:12.632377 1235505 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:12.632523 1235505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:12.676985 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:38:12.686285 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:38:12.695613 1235505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:38:12.700061 1235505 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:38:12.700126 1235505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:38:12.748238 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:38:12.759189 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:38:12.768022 1235505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:38:12.775435 1235505 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:38:12.775504 1235505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:38:12.817853 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:38:12.830631 1235505 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:38:12.836260 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:38:12.897293 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:38:12.951287 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:38:13.020868 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:38:13.117327 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:38:13.188198 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
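The openssl runs above are minikube's pre-flight expiry check: each control-plane certificate must stay valid for at least the next 86400 seconds (24 hours) before the existing cluster state is reused. A minimal sketch of the same check run by hand, using the apiserver certificate path that appears earlier in this log; the echo messages are illustrative and not minikube output:

	# check that the apiserver certificate does not expire within the next 24h
	# (exit status 0 = still valid, non-zero = expiring soon or unreadable)
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
	  && echo "cert valid for at least 24h" \
	  || echo "cert expires within 24h"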
	I1108 10:38:13.306290 1235505 kubeadm.go:401] StartCluster: {Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:38:13.306394 1235505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:38:13.306459 1235505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:38:13.423504 1235505 cri.go:89] found id: "5ff011c39fa1a4e6ccf1602407612d6fd09adb5c8853548d45cbc57693896266"
	I1108 10:38:13.423526 1235505 cri.go:89] found id: "99b5f6a8373260a1fb2a88d8f9ff8805d70fb0e4e09b4e2bea1c955d090e83a3"
	I1108 10:38:13.423531 1235505 cri.go:89] found id: ""
	I1108 10:38:13.423580 1235505 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:38:13.477357 1235505 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:38:13Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:38:13.477459 1235505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:38:13.520913 1235505 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:38:13.520935 1235505 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:38:13.520997 1235505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:38:13.565748 1235505 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:38:13.566317 1235505 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-291044" does not appear in /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:13.566567 1235505 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-1027379/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-291044" cluster setting kubeconfig missing "no-preload-291044" context setting]
	I1108 10:38:13.567022 1235505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:13.568417 1235505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:38:13.596855 1235505 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:38:13.596887 1235505 kubeadm.go:602] duration metric: took 75.945699ms to restartPrimaryControlPlane
	I1108 10:38:13.596897 1235505 kubeadm.go:403] duration metric: took 290.618895ms to StartCluster
	I1108 10:38:13.596916 1235505 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:13.596982 1235505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:13.597848 1235505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:13.598080 1235505 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:38:13.598434 1235505 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:13.598438 1235505 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:38:13.598561 1235505 addons.go:70] Setting storage-provisioner=true in profile "no-preload-291044"
	I1108 10:38:13.598570 1235505 addons.go:70] Setting dashboard=true in profile "no-preload-291044"
	I1108 10:38:13.598576 1235505 addons.go:239] Setting addon storage-provisioner=true in "no-preload-291044"
	W1108 10:38:13.598583 1235505 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:38:13.598590 1235505 addons.go:70] Setting default-storageclass=true in profile "no-preload-291044"
	I1108 10:38:13.598601 1235505 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-291044"
	I1108 10:38:13.598608 1235505 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:38:13.598890 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:13.599048 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:13.598583 1235505 addons.go:239] Setting addon dashboard=true in "no-preload-291044"
	W1108 10:38:13.600680 1235505 addons.go:248] addon dashboard should already be in state true
	I1108 10:38:13.600713 1235505 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:38:13.601157 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:13.606479 1235505 out.go:179] * Verifying Kubernetes components...
	I1108 10:38:13.609608 1235505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:13.641415 1235505 addons.go:239] Setting addon default-storageclass=true in "no-preload-291044"
	W1108 10:38:13.641439 1235505 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:38:13.641465 1235505 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:38:13.641881 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:13.658693 1235505 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:38:13.662164 1235505 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:38:13.662188 1235505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:38:13.662256 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:13.675023 1235505 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:38:13.675049 1235505 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:38:13.675129 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:13.682720 1235505 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:38:13.690538 1235505 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:38:17.749646 1234759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.954561881s)
	I1108 10:38:17.749711 1234759 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (9.920381239s)
	I1108 10:38:17.749724 1234759 api_server.go:72] duration metric: took 10.472273606s to wait for apiserver process to appear ...
	I1108 10:38:17.749729 1234759 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:38:17.749745 1234759 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:38:17.750070 1234759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.857049914s)
	I1108 10:38:17.750366 1234759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.349183287s)
	I1108 10:38:17.753293 1234759 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-515571 addons enable metrics-server
	
	I1108 10:38:17.778749 1234759 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:38:17.780023 1234759 api_server.go:141] control plane version: v1.34.1
	I1108 10:38:17.780048 1234759 api_server.go:131] duration metric: took 30.312096ms to wait for apiserver health ...
	I1108 10:38:17.780075 1234759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:38:17.786382 1234759 system_pods.go:59] 8 kube-system pods found
	I1108 10:38:17.786421 1234759 system_pods.go:61] "coredns-66bc5c9577-tzpcv" [e29d787c-07fa-45a9-8486-67e87bde431e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 10:38:17.786431 1234759 system_pods.go:61] "etcd-newest-cni-515571" [5340f708-b23d-4f0b-bda7-995b964333e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:38:17.786437 1234759 system_pods.go:61] "kindnet-6vtjh" [69f8e634-a5cb-438a-a6ac-5762a43d39e5] Running
	I1108 10:38:17.786445 1234759 system_pods.go:61] "kube-apiserver-newest-cni-515571" [82e0acec-a5e0-43ed-b26f-072f360ced86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:38:17.786461 1234759 system_pods.go:61] "kube-controller-manager-newest-cni-515571" [3966d3a4-3fac-4d01-858a-27ad292e0b25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:38:17.786470 1234759 system_pods.go:61] "kube-proxy-cqlhl" [0385ed05-d22d-4bb0-b165-eeb7226e70fd] Running
	I1108 10:38:17.786478 1234759 system_pods.go:61] "kube-scheduler-newest-cni-515571" [6e339344-0ff3-412a-b78f-55ef23e04a9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:38:17.786489 1234759 system_pods.go:61] "storage-provisioner" [db0e8015-0d1b-4030-ad64-744fe3afd379] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 10:38:17.786496 1234759 system_pods.go:74] duration metric: took 6.414219ms to wait for pod list to return data ...
	I1108 10:38:17.786510 1234759 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:38:17.798309 1234759 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 10:38:17.798781 1234759 default_sa.go:45] found service account: "default"
	I1108 10:38:17.798805 1234759 default_sa.go:55] duration metric: took 12.287661ms for default service account to be created ...
	I1108 10:38:17.798818 1234759 kubeadm.go:587] duration metric: took 10.521365716s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 10:38:17.798837 1234759 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:38:17.801047 1234759 addons.go:515] duration metric: took 10.523195791s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 10:38:17.805520 1234759 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:38:17.805567 1234759 node_conditions.go:123] node cpu capacity is 2
	I1108 10:38:17.805580 1234759 node_conditions.go:105] duration metric: took 6.73754ms to run NodePressure ...
	I1108 10:38:17.805591 1234759 start.go:242] waiting for startup goroutines ...
	I1108 10:38:17.805599 1234759 start.go:247] waiting for cluster config update ...
	I1108 10:38:17.805611 1234759 start.go:256] writing updated cluster config ...
	I1108 10:38:17.805904 1234759 ssh_runner.go:195] Run: rm -f paused
	I1108 10:38:17.906779 1234759 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:38:17.909800 1234759 out.go:179] * Done! kubectl is now configured to use "newest-cni-515571" cluster and "default" namespace by default
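The "minor skew: 1" note above is informational: kubectl 1.33.x is within the supported one-minor-version skew of the 1.34.1 API server, so the context switch to "newest-cni-515571" proceeds normally. A quick way to reproduce the comparison reported in that line, assuming kubectl is on PATH and the newest-cni-515571 context is selected:

	# print client and server versions to confirm the reported skew
	kubectl version --output=json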
	I1108 10:38:13.693458 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:38:13.693481 1235505 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:38:13.693556 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:13.724704 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:13.725365 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:13.750023 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:14.151273 1235505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:38:14.173840 1235505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:38:14.199409 1235505 node_ready.go:35] waiting up to 6m0s for node "no-preload-291044" to be "Ready" ...
	I1108 10:38:14.250703 1235505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:38:14.265538 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:38:14.265559 1235505 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:38:14.336710 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:38:14.336731 1235505 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:38:14.456827 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:38:14.456849 1235505 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:38:14.599095 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:38:14.599159 1235505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:38:14.761828 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:38:14.761894 1235505 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:38:14.813101 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:38:14.813168 1235505 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:38:14.850022 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:38:14.850097 1235505 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:38:14.876116 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:38:14.876180 1235505 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:38:14.920809 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:38:14.920873 1235505 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:38:14.956980 1235505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.352929715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.364622448Z" level=info msg="Running pod sandbox: kube-system/kindnet-6vtjh/POD" id=730897a0-2964-4dcd-9f19-56ec7b64390b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.364709247Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.405175988Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ae8b5de0-459a-4110-a171-d946ab05e2ae name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.412107968Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=730897a0-2964-4dcd-9f19-56ec7b64390b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.441864101Z" level=info msg="Ran pod sandbox f466d952445e030965d8f99dc737b37e80a0d29dc6c09b577cb2c582f76cdd54 with infra container: kube-system/kube-proxy-cqlhl/POD" id=ae8b5de0-459a-4110-a171-d946ab05e2ae name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.453799617Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3d2cbd81-eb68-4be8-aa0d-02186e296f24 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.460388476Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a91eb870-49f5-4db7-81f0-036fdc259d26 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.465733073Z" level=info msg="Creating container: kube-system/kube-proxy-cqlhl/kube-proxy" id=9005c773-b31b-4598-a527-aafa63386d1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.465834026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.483236301Z" level=info msg="Ran pod sandbox 8f2355aaa4c3e2bc4dc78326912d7f7bae1deef7b8b08ef39d1f300d55f0a4b2 with infra container: kube-system/kindnet-6vtjh/POD" id=730897a0-2964-4dcd-9f19-56ec7b64390b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.49560103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.496174683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.507527397Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=de701164-57f0-4312-90bf-dd07925a4cbe name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.518457831Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7b96f13c-7f45-4325-af22-0a67b8bdb7d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.548121708Z" level=info msg="Creating container: kube-system/kindnet-6vtjh/kindnet-cni" id=8ab0e51c-a1e8-4c3a-9551-865c949cc894 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.548215695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.561070626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.565602381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.608215208Z" level=info msg="Created container aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345: kube-system/kube-proxy-cqlhl/kube-proxy" id=9005c773-b31b-4598-a527-aafa63386d1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.609380384Z" level=info msg="Starting container: aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345" id=10a64431-9de8-439b-b134-60928d60effc name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.612090302Z" level=info msg="Started container" PID=1058 containerID=aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345 description=kube-system/kube-proxy-cqlhl/kube-proxy id=10a64431-9de8-439b-b134-60928d60effc name=/runtime.v1.RuntimeService/StartContainer sandboxID=f466d952445e030965d8f99dc737b37e80a0d29dc6c09b577cb2c582f76cdd54
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.686085635Z" level=info msg="Created container 93511cf560575ebe917dce5846ff27235243c676c68ce71935565137b991bee0: kube-system/kindnet-6vtjh/kindnet-cni" id=8ab0e51c-a1e8-4c3a-9551-865c949cc894 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.686974084Z" level=info msg="Starting container: 93511cf560575ebe917dce5846ff27235243c676c68ce71935565137b991bee0" id=235a1066-2544-48e6-b5bf-1178b13df11a name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.690252822Z" level=info msg="Started container" PID=1068 containerID=93511cf560575ebe917dce5846ff27235243c676c68ce71935565137b991bee0 description=kube-system/kindnet-6vtjh/kindnet-cni id=235a1066-2544-48e6-b5bf-1178b13df11a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f2355aaa4c3e2bc4dc78326912d7f7bae1deef7b8b08ef39d1f300d55f0a4b2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	93511cf560575       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   8f2355aaa4c3e       kindnet-6vtjh                               kube-system
	aa314f8fce25c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   f466d952445e0       kube-proxy-cqlhl                            kube-system
	93d0c8e070cb6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      1                   b24ae7dd8a615       etcd-newest-cni-515571                      kube-system
	38bc479dedc5a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            1                   fa232d3318380       kube-scheduler-newest-cni-515571            kube-system
	02f8d0ac9dba3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   1                   995be64be113b       kube-controller-manager-newest-cni-515571   kube-system
	8028c5744e9c2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            1                   20e3366c89337       kube-apiserver-newest-cni-515571            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-515571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-515571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=newest-cni-515571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_37_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:37:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-515571
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:38:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:38:14 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:38:14 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:38:14 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 10:38:14 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-515571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                da96ae8e-28b2-4384-8ee4-16fe0d13fbbb
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-515571                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-6vtjh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-515571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-515571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-cqlhl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-515571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientPID     34s                kubelet          Node newest-cni-515571 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node newest-cni-515571 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-515571 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-515571 event: Registered Node newest-cni-515571 in Controller
	  Normal   Starting                 17s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17s (x8 over 17s)  kubelet          Node newest-cni-515571 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node newest-cni-515571 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x8 over 17s)  kubelet          Node newest-cni-515571 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-515571 event: Registered Node newest-cni-515571 in Controller
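The Ready=False condition (reason NetworkPluginNotReady) and the node.kubernetes.io/not-ready taint shown above are consistent with the Pending coredns and storage-provisioner pods reported earlier in this log: nothing schedulable can land on the node until kindnet writes a CNI config under /etc/cni/net.d/. A hedged way to watch the taint clear once the CNI is up, using the node name from this report:

	# list the node's taints; an empty result means the not-ready taint was removed
	kubectl get node newest-cni-515571 -o jsonpath='{.spec.taints}'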
	
	
	==> dmesg <==
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:36] overlayfs: idmapped layers are currently not supported
	[ +30.788294] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:38] overlayfs: idmapped layers are currently not supported
	[  +6.100629] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [93d0c8e070cb668a489e9ad2a2665a4c28e5a124650ce0d95549a343c79037a0] <==
	{"level":"warn","ts":"2025-11-08T10:38:11.889884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:11.904727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:11.923301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:11.959937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:11.988964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.005700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.061530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.078088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.113153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.161154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.206087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.238546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.256335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.284171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.318650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.336003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.379211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.409527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.438668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.464499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.517060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.556351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.577738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.586582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.770204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60682","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:38:23 up  9:20,  0 user,  load average: 6.52, 4.45, 3.38
	Linux newest-cni-515571 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [93511cf560575ebe917dce5846ff27235243c676c68ce71935565137b991bee0] <==
	I1108 10:38:16.819866       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:38:16.820259       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:38:16.821072       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:38:16.821138       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:38:16.821172       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:38:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:38:17.014069       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:38:17.014160       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:38:17.014193       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:38:17.016260       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [8028c5744e9c2fa0cbfd055e941f992b8050ed81b1668d7cdfad5fcf592a4fea] <==
	I1108 10:38:14.630045       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 10:38:14.647327       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:38:14.647401       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:38:14.647474       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 10:38:14.647508       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:38:14.661277       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:38:14.661311       1 policy_source.go:240] refreshing policies
	I1108 10:38:14.661962       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:38:14.681494       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:38:14.682145       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:38:14.682158       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:38:14.683598       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1108 10:38:14.814235       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:38:15.174313       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:38:16.159179       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:38:16.933701       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:38:17.033836       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:38:17.083005       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:38:17.094758       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:38:17.240811       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.189.32"}
	I1108 10:38:17.300846       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.237.151"}
	I1108 10:38:19.087286       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:38:19.202050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:38:19.260248       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:38:19.340975       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [02f8d0ac9dba3db69b485cb9b56006f12f108e27f5767ecbcca542963009eec6] <==
	I1108 10:38:18.888535       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:38:18.890924       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:38:18.891183       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 10:38:18.897127       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 10:38:18.900865       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:38:18.905381       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:38:18.905774       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:38:18.925418       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:38:18.925661       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:38:18.951684       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:38:18.960742       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 10:38:18.960789       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:38:18.974722       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:38:18.975098       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:38:18.975352       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-515571"
	I1108 10:38:18.975467       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 10:38:18.976153       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:38:18.976458       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:38:18.976475       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:38:18.976482       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:38:18.985117       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:38:18.996195       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:38:18.996387       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:38:19.000599       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:38:19.004007       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345] <==
	I1108 10:38:17.533333       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:38:17.978374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:38:18.079425       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:38:18.079546       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:38:18.079673       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:38:18.251606       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:38:18.251667       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:38:18.262913       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:38:18.263255       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:38:18.263279       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:38:18.276242       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:38:18.276268       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:38:18.276619       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:38:18.276696       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:38:18.278433       1 config.go:200] "Starting service config controller"
	I1108 10:38:18.278453       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:38:18.278526       1 config.go:309] "Starting node config controller"
	I1108 10:38:18.278536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:38:18.278542       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:38:18.380163       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:38:18.380232       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 10:38:18.380513       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [38bc479dedc5ae4fd9d713123be920853a980f8e2e86f024661007578f58babe] <==
	I1108 10:38:12.147391       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:38:16.564851       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:38:16.564883       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:38:16.631524       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:38:16.631634       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:38:16.631655       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:38:16.631678       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:38:16.633973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:38:16.633987       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:38:16.634006       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:38:16.634043       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:38:16.732126       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:38:16.737774       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:38:16.737882       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.575339     725 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:newest-cni-515571\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-515571' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.575377     725 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-newest-cni-515571\" is forbidden: User \"system:node:newest-cni-515571\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-515571' and this object" podUID="2ae1802e0a9b46b324af26050bccbc9a" pod="kube-system/kube-scheduler-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.616663     725 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-cqlhl\" is forbidden: User \"system:node:newest-cni-515571\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-515571' and this object" podUID="0385ed05-d22d-4bb0-b165-eeb7226e70fd" pod="kube-system/kube-proxy-cqlhl"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.754409     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-515571\" already exists" pod="kube-system/kube-scheduler-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.754447     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.819816     725 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.819907     725 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.819932     725 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.824101     725 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.861123     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-515571\" already exists" pod="kube-system/etcd-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.861160     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.915788     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-515571\" already exists" pod="kube-system/kube-apiserver-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.915822     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.970184     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-515571\" already exists" pod="kube-system/kube-controller-manager-newest-cni-515571"
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577474     725 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577513     725 projected.go:196] Error preparing data for projected volume kube-api-access-28jx7 for pod kube-system/kindnet-6vtjh: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-515571" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-515571' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577596     725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69f8e634-a5cb-438a-a6ac-5762a43d39e5-kube-api-access-28jx7 podName:69f8e634-a5cb-438a-a6ac-5762a43d39e5 nodeName:}" failed. No retries permitted until 2025-11-08 10:38:16.07756959 +0000 UTC m=+9.734043952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-28jx7" (UniqueName: "kubernetes.io/projected/69f8e634-a5cb-438a-a6ac-5762a43d39e5-kube-api-access-28jx7") pod "kindnet-6vtjh" (UID: "69f8e634-a5cb-438a-a6ac-5762a43d39e5") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-515571" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-515571' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577639     725 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577653     725 projected.go:196] Error preparing data for projected volume kube-api-access-k45bh for pod kube-system/kube-proxy-cqlhl: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-515571" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-515571' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577688     725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0385ed05-d22d-4bb0-b165-eeb7226e70fd-kube-api-access-k45bh podName:0385ed05-d22d-4bb0-b165-eeb7226e70fd nodeName:}" failed. No retries permitted until 2025-11-08 10:38:16.07767787 +0000 UTC m=+9.734152232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k45bh" (UniqueName: "kubernetes.io/projected/0385ed05-d22d-4bb0-b165-eeb7226e70fd-kube-api-access-k45bh") pod "kube-proxy-cqlhl" (UID: "0385ed05-d22d-4bb0-b165-eeb7226e70fd") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-515571" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-515571' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Nov 08 10:38:16 newest-cni-515571 kubelet[725]: I1108 10:38:16.220606     725 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:38:16 newest-cni-515571 kubelet[725]: E1108 10:38:16.584850     725 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/crio/crio-aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345\": RecentStats: unable to find data in memory cache]"
	Nov 08 10:38:19 newest-cni-515571 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:38:19 newest-cni-515571 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:38:19 newest-cni-515571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-515571 -n newest-cni-515571
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-515571 -n newest-cni-515571: exit status 2 (403.653708ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-515571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-tzpcv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-2hmg6 kubernetes-dashboard-855c9754f9-ngdnl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-515571 describe pod coredns-66bc5c9577-tzpcv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-2hmg6 kubernetes-dashboard-855c9754f9-ngdnl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-515571 describe pod coredns-66bc5c9577-tzpcv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-2hmg6 kubernetes-dashboard-855c9754f9-ngdnl: exit status 1 (88.121859ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-tzpcv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-2hmg6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-ngdnl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-515571 describe pod coredns-66bc5c9577-tzpcv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-2hmg6 kubernetes-dashboard-855c9754f9-ngdnl: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-515571
helpers_test.go:243: (dbg) docker inspect newest-cni-515571:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d",
	        "Created": "2025-11-08T10:37:25.283274548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1234885,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:37:59.480132826Z",
	            "FinishedAt": "2025-11-08T10:37:58.532191531Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/hosts",
	        "LogPath": "/var/lib/docker/containers/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d-json.log",
	        "Name": "/newest-cni-515571",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-515571:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-515571",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d",
	                "LowerDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223/merged",
	                "UpperDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223/diff",
	                "WorkDir": "/var/lib/docker/overlay2/643cda8bf3049281e34e98268848f9f3c9834427bb523f4bb3df251a35ded223/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-515571",
	                "Source": "/var/lib/docker/volumes/newest-cni-515571/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-515571",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-515571",
	                "name.minikube.sigs.k8s.io": "newest-cni-515571",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebdeed8d0947e803b687dc81d80784544dc38bc1cb0503c1592f4d39912e5df2",
	            "SandboxKey": "/var/run/docker/netns/ebdeed8d0947",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34547"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34548"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34551"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34549"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34550"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-515571": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:19:de:c7:07:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e044b4554ec93678a97772c9b706896f0ba13332a99b10f9f482de6020b370fa",
	                    "EndpointID": "699767b60e0d101f6eb70897e8c0ede638428cc09776a63fab1a0f38f2e901c6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-515571",
	                        "f94bf5ad2ae9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-515571 -n newest-cni-515571
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-515571 -n newest-cni-515571: exit status 2 (383.440801ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-515571 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-515571 logs -n 25: (1.200747975s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-790346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-790346 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-790346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:35 UTC │ 08 Nov 25 10:36 UTC │
	│ image   │ default-k8s-diff-port-236075 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ pause   │ -p default-k8s-diff-port-236075 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-553553                                                                                                                                                                                                               │ disable-driver-mounts-553553 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:37 UTC │
	│ image   │ embed-certs-790346 image list --format=json                                                                                                                                                                                                   │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-790346 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-291044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ stop    │ -p no-preload-291044 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p newest-cni-515571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ stop    │ -p newest-cni-515571 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-515571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:38 UTC │
	│ addons  │ enable dashboard -p no-preload-291044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │                     │
	│ image   │ newest-cni-515571 image list --format=json                                                                                                                                                                                                    │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-515571 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:38:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:38:03.479591 1235505 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:38:03.480146 1235505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:38:03.480181 1235505 out.go:374] Setting ErrFile to fd 2...
	I1108 10:38:03.480203 1235505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:38:03.480508 1235505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:38:03.480906 1235505 out.go:368] Setting JSON to false
	I1108 10:38:03.481802 1235505 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33629,"bootTime":1762564655,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:38:03.481899 1235505 start.go:143] virtualization:  
	I1108 10:38:03.486959 1235505 out.go:179] * [no-preload-291044] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:38:03.490063 1235505 notify.go:221] Checking for updates...
	I1108 10:38:03.490981 1235505 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:38:03.493800 1235505 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:38:03.496614 1235505 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:03.499540 1235505 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:38:03.502539 1235505 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:38:03.505348 1235505 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:38:03.508688 1235505 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:03.509246 1235505 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:38:03.541902 1235505 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:38:03.542015 1235505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:38:03.610704 1235505 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:38:03.600157627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:38:03.610805 1235505 docker.go:319] overlay module found
	I1108 10:38:03.614031 1235505 out.go:179] * Using the docker driver based on existing profile
	I1108 10:38:03.616858 1235505 start.go:309] selected driver: docker
	I1108 10:38:03.616877 1235505 start.go:930] validating driver "docker" against &{Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:38:03.616982 1235505 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:38:03.617660 1235505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:38:03.682872 1235505 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:38:03.673713347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:38:03.683210 1235505 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:38:03.683244 1235505 cni.go:84] Creating CNI manager for ""
	I1108 10:38:03.683299 1235505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:38:03.683343 1235505 start.go:353] cluster config:
	{Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:38:03.686538 1235505 out.go:179] * Starting "no-preload-291044" primary control-plane node in "no-preload-291044" cluster
	I1108 10:38:03.689364 1235505 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:38:03.692296 1235505 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:38:03.695162 1235505 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:03.695306 1235505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/config.json ...
	I1108 10:38:03.695658 1235505 cache.go:107] acquiring lock: {Name:mk8513c6159258582048bf022eb3626495f0ef99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.695747 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 10:38:03.695762 1235505 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.788µs
	I1108 10:38:03.695770 1235505 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 10:38:03.695785 1235505 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:38:03.695983 1235505 cache.go:107] acquiring lock: {Name:mkc673276c059e1336edcaed46b38c8432a558c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696048 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1108 10:38:03.696056 1235505 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 78.414µs
	I1108 10:38:03.696063 1235505 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1108 10:38:03.696083 1235505 cache.go:107] acquiring lock: {Name:mkfbe116f289c09e7f023243a3e334812266f562 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696120 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1108 10:38:03.696125 1235505 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 52.479µs
	I1108 10:38:03.696131 1235505 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1108 10:38:03.696141 1235505 cache.go:107] acquiring lock: {Name:mkab778ec210a01a148a027551ae4dd6f48ac681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696168 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1108 10:38:03.696173 1235505 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 33.706µs
	I1108 10:38:03.696179 1235505 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1108 10:38:03.696187 1235505 cache.go:107] acquiring lock: {Name:mk7e5c4997cde36ed0e08a0661a5a5dfada4e032 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696212 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1108 10:38:03.696217 1235505 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.769µs
	I1108 10:38:03.696223 1235505 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1108 10:38:03.696233 1235505 cache.go:107] acquiring lock: {Name:mkde9e8ad2f329aff2c9e641a9eec6a25ba01057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696257 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1108 10:38:03.696262 1235505 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 29.743µs
	I1108 10:38:03.696267 1235505 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1108 10:38:03.696275 1235505 cache.go:107] acquiring lock: {Name:mk0c87ccf4c259c637cc851ae066ca5ca4e4afa3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696300 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1108 10:38:03.696306 1235505 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.195µs
	I1108 10:38:03.696311 1235505 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1108 10:38:03.696320 1235505 cache.go:107] acquiring lock: {Name:mkfd6f0a7827507a867318ffa03b1f88753d73c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.696344 1235505 cache.go:115] /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1108 10:38:03.696432 1235505 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 110.199µs
	I1108 10:38:03.696467 1235505 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1108 10:38:03.696475 1235505 cache.go:87] Successfully saved all images to host disk.
	I1108 10:38:03.724692 1235505 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:38:03.724713 1235505 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:38:03.724726 1235505 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:38:03.724748 1235505 start.go:360] acquireMachinesLock for no-preload-291044: {Name:mkddf61b3e3a9309635e3814dcc2626dcf0ac06a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:03.724802 1235505 start.go:364] duration metric: took 39.794µs to acquireMachinesLock for "no-preload-291044"
	I1108 10:38:03.724827 1235505 start.go:96] Skipping create...Using existing machine configuration
	I1108 10:38:03.724833 1235505 fix.go:54] fixHost starting: 
	I1108 10:38:03.725090 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:03.748518 1235505 fix.go:112] recreateIfNeeded on no-preload-291044: state=Stopped err=<nil>
	W1108 10:38:03.748550 1235505 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 10:37:59.450095 1234759 out.go:252] * Restarting existing docker container for "newest-cni-515571" ...
	I1108 10:37:59.450223 1234759 cli_runner.go:164] Run: docker start newest-cni-515571
	I1108 10:37:59.691563 1234759 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:37:59.715270 1234759 kic.go:430] container "newest-cni-515571" state is running.
	I1108 10:37:59.715681 1234759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:37:59.737607 1234759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/config.json ...
	I1108 10:37:59.737826 1234759 machine.go:94] provisionDockerMachine start ...
	I1108 10:37:59.737890 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:37:59.765889 1234759 main.go:143] libmachine: Using SSH client type: native
	I1108 10:37:59.766211 1234759 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1108 10:37:59.766220 1234759 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:37:59.767223 1234759 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:38:02.940329 1234759 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-515571
	
	I1108 10:38:02.940355 1234759 ubuntu.go:182] provisioning hostname "newest-cni-515571"
	I1108 10:38:02.940475 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:02.963657 1234759 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:02.964006 1234759 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1108 10:38:02.964019 1234759 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-515571 && echo "newest-cni-515571" | sudo tee /etc/hostname
	I1108 10:38:03.185627 1234759 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-515571
	
	I1108 10:38:03.185729 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:03.217977 1234759 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:03.218304 1234759 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1108 10:38:03.218323 1234759 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-515571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-515571/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-515571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:38:03.384925 1234759 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:38:03.384951 1234759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:38:03.384979 1234759 ubuntu.go:190] setting up certificates
	I1108 10:38:03.384995 1234759 provision.go:84] configureAuth start
	I1108 10:38:03.385072 1234759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:38:03.410074 1234759 provision.go:143] copyHostCerts
	I1108 10:38:03.410142 1234759 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:38:03.410168 1234759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:38:03.410246 1234759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:38:03.410345 1234759 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:38:03.410354 1234759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:38:03.410381 1234759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:38:03.410483 1234759 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:38:03.410494 1234759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:38:03.410525 1234759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:38:03.410580 1234759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.newest-cni-515571 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-515571]
	I1108 10:38:03.559473 1234759 provision.go:177] copyRemoteCerts
	I1108 10:38:03.559570 1234759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:38:03.559639 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:03.593848 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:03.708761 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:38:03.733719 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:38:03.750660 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:38:03.783507 1234759 provision.go:87] duration metric: took 398.490052ms to configureAuth
	I1108 10:38:03.783530 1234759 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:38:03.783725 1234759 config.go:182] Loaded profile config "newest-cni-515571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:03.783838 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:03.809496 1234759 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:03.809803 1234759 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1108 10:38:03.809817 1234759 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:38:04.187453 1234759 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:38:04.187482 1234759 machine.go:97] duration metric: took 4.449647113s to provisionDockerMachine
	I1108 10:38:04.187493 1234759 start.go:293] postStartSetup for "newest-cni-515571" (driver="docker")
	I1108 10:38:04.187504 1234759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:38:04.187577 1234759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:38:04.187629 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:04.212660 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:04.326544 1234759 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:38:04.330373 1234759 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:38:04.330400 1234759 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:38:04.330412 1234759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:38:04.330464 1234759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:38:04.330556 1234759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:38:04.330670 1234759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:38:04.340135 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:04.365878 1234759 start.go:296] duration metric: took 178.368566ms for postStartSetup
	I1108 10:38:04.365981 1234759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:38:04.366030 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:04.391616 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:04.499507 1234759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:38:04.508480 1234759 fix.go:56] duration metric: took 5.07691335s for fixHost
	I1108 10:38:04.508509 1234759 start.go:83] releasing machines lock for "newest-cni-515571", held for 5.076977504s
	I1108 10:38:04.508578 1234759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-515571
	I1108 10:38:04.538134 1234759 ssh_runner.go:195] Run: cat /version.json
	I1108 10:38:04.538202 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:04.539583 1234759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:38:04.539649 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:04.575008 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:04.576006 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:04.676229 1234759 ssh_runner.go:195] Run: systemctl --version
	I1108 10:38:04.767884 1234759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:38:04.804416 1234759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:38:04.809653 1234759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:38:04.809727 1234759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:38:04.817750 1234759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:38:04.817777 1234759 start.go:496] detecting cgroup driver to use...
	I1108 10:38:04.817830 1234759 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:38:04.817884 1234759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:38:04.833091 1234759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:38:04.846416 1234759 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:38:04.846502 1234759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:38:04.861634 1234759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:38:04.874846 1234759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:38:04.998578 1234759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:38:05.135300 1234759 docker.go:234] disabling docker service ...
	I1108 10:38:05.135437 1234759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:38:05.152326 1234759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:38:05.166183 1234759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:38:05.278069 1234759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:38:05.400328 1234759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:38:05.415907 1234759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:38:05.431051 1234759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:38:05.431140 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.440240 1234759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:38:05.440339 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.449494 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.458714 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.467275 1234759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:38:05.476936 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.492419 1234759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.503097 1234759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:05.521816 1234759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:38:05.531770 1234759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:38:05.539998 1234759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:05.700180 1234759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:38:05.839628 1234759 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:38:05.839762 1234759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:38:05.844389 1234759 start.go:564] Will wait 60s for crictl version
	I1108 10:38:05.844504 1234759 ssh_runner.go:195] Run: which crictl
	I1108 10:38:05.848236 1234759 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:38:05.875294 1234759 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:38:05.875471 1234759 ssh_runner.go:195] Run: crio --version
	I1108 10:38:05.903310 1234759 ssh_runner.go:195] Run: crio --version
	I1108 10:38:05.935121 1234759 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:38:05.938080 1234759 cli_runner.go:164] Run: docker network inspect newest-cni-515571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:38:05.954105 1234759 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:38:05.958107 1234759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:38:05.971030 1234759 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 10:38:05.974017 1234759 kubeadm.go:884] updating cluster {Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:38:05.974157 1234759 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:05.974231 1234759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:38:06.011184 1234759 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:38:06.011213 1234759 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:38:06.011284 1234759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:38:06.043441 1234759 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:38:06.043466 1234759 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:38:06.043474 1234759 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:38:06.043570 1234759 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-515571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:38:06.043652 1234759 ssh_runner.go:195] Run: crio config
	I1108 10:38:06.125687 1234759 cni.go:84] Creating CNI manager for ""
	I1108 10:38:06.125714 1234759 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:38:06.125734 1234759 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 10:38:06.125758 1234759 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-515571 NodeName:newest-cni-515571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:38:06.125897 1234759 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-515571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:38:06.125970 1234759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:38:06.135272 1234759 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:38:06.135366 1234759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:38:06.143782 1234759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 10:38:06.156729 1234759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:38:06.170481 1234759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1108 10:38:06.183713 1234759 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:38:06.187405 1234759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:38:06.197241 1234759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:06.326024 1234759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:38:06.342698 1234759 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571 for IP: 192.168.76.2
	I1108 10:38:06.342766 1234759 certs.go:195] generating shared ca certs ...
	I1108 10:38:06.342798 1234759 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:06.342975 1234759 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:38:06.343059 1234759 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:38:06.343094 1234759 certs.go:257] generating profile certs ...
	I1108 10:38:06.343236 1234759 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/client.key
	I1108 10:38:06.343347 1234759 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key.0dbe4724
	I1108 10:38:06.343429 1234759 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key
	I1108 10:38:06.343595 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:38:06.343670 1234759 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:38:06.343696 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:38:06.343759 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:38:06.343816 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:38:06.343881 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:38:06.343945 1234759 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:06.344766 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:38:06.366263 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:38:06.386219 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:38:06.407337 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:38:06.427959 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 10:38:06.464755 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:38:06.485545 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:38:06.508576 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/newest-cni-515571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:38:06.531913 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:38:06.550951 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:38:06.569249 1234759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:38:06.589084 1234759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:38:06.603103 1234759 ssh_runner.go:195] Run: openssl version
	I1108 10:38:06.609766 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:38:06.618968 1234759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:38:06.623128 1234759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:38:06.623190 1234759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:38:06.664747 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:38:06.672888 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:38:06.681238 1234759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:38:06.685102 1234759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:38:06.685164 1234759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:38:06.728077 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:38:06.735981 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:38:06.744071 1234759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:06.747486 1234759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:06.747585 1234759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:06.788799 1234759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:38:06.796804 1234759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:38:06.800379 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:38:06.841720 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:38:06.882874 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:38:06.923707 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:38:06.972864 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:38:07.034706 1234759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 10:38:07.128179 1234759 kubeadm.go:401] StartCluster: {Name:newest-cni-515571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-515571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:38:07.128288 1234759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:38:07.128360 1234759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:38:07.195611 1234759 cri.go:89] found id: "93d0c8e070cb668a489e9ad2a2665a4c28e5a124650ce0d95549a343c79037a0"
	I1108 10:38:07.195635 1234759 cri.go:89] found id: "38bc479dedc5ae4fd9d713123be920853a980f8e2e86f024661007578f58babe"
	I1108 10:38:07.195640 1234759 cri.go:89] found id: "02f8d0ac9dba3db69b485cb9b56006f12f108e27f5767ecbcca542963009eec6"
	I1108 10:38:07.195644 1234759 cri.go:89] found id: "8028c5744e9c2fa0cbfd055e941f992b8050ed81b1668d7cdfad5fcf592a4fea"
	I1108 10:38:07.195647 1234759 cri.go:89] found id: ""
	I1108 10:38:07.195702 1234759 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:38:07.218490 1234759 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:38:07Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:38:07.218565 1234759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:38:07.239766 1234759 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:38:07.239783 1234759 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:38:07.239833 1234759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:38:07.255042 1234759 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:38:07.255433 1234759 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-515571" does not appear in /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:07.255540 1234759 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-1027379/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-515571" cluster setting kubeconfig missing "newest-cni-515571" context setting]
	I1108 10:38:07.255817 1234759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:07.257372 1234759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:38:07.270863 1234759 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 10:38:07.270897 1234759 kubeadm.go:602] duration metric: took 31.108288ms to restartPrimaryControlPlane
	I1108 10:38:07.270906 1234759 kubeadm.go:403] duration metric: took 142.736548ms to StartCluster
	I1108 10:38:07.270920 1234759 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:07.270977 1234759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:07.277162 1234759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:07.277415 1234759 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:38:07.277785 1234759 config.go:182] Loaded profile config "newest-cni-515571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:07.277846 1234759 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:38:07.277979 1234759 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-515571"
	I1108 10:38:07.277998 1234759 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-515571"
	W1108 10:38:07.278013 1234759 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:38:07.278036 1234759 host.go:66] Checking if "newest-cni-515571" exists ...
	I1108 10:38:07.281120 1234759 addons.go:70] Setting dashboard=true in profile "newest-cni-515571"
	I1108 10:38:07.281147 1234759 addons.go:239] Setting addon dashboard=true in "newest-cni-515571"
	W1108 10:38:07.281155 1234759 addons.go:248] addon dashboard should already be in state true
	I1108 10:38:07.281184 1234759 host.go:66] Checking if "newest-cni-515571" exists ...
	I1108 10:38:07.281620 1234759 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:38:07.281805 1234759 addons.go:70] Setting default-storageclass=true in profile "newest-cni-515571"
	I1108 10:38:07.281833 1234759 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-515571"
	I1108 10:38:07.282123 1234759 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:38:07.283308 1234759 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:38:07.283803 1234759 out.go:179] * Verifying Kubernetes components...
	I1108 10:38:07.287497 1234759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:07.335251 1234759 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:38:07.338418 1234759 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:38:07.341492 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:38:07.341514 1234759 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:38:07.341588 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:07.341962 1234759 addons.go:239] Setting addon default-storageclass=true in "newest-cni-515571"
	W1108 10:38:07.341973 1234759 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:38:07.341997 1234759 host.go:66] Checking if "newest-cni-515571" exists ...
	I1108 10:38:07.342410 1234759 cli_runner.go:164] Run: docker container inspect newest-cni-515571 --format={{.State.Status}}
	I1108 10:38:07.366431 1234759 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:38:03.753237 1235505 out.go:252] * Restarting existing docker container for "no-preload-291044" ...
	I1108 10:38:03.753329 1235505 cli_runner.go:164] Run: docker start no-preload-291044
	I1108 10:38:04.057467 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:04.087682 1235505 kic.go:430] container "no-preload-291044" state is running.
	I1108 10:38:04.088083 1235505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:38:04.114833 1235505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/config.json ...
	I1108 10:38:04.115063 1235505 machine.go:94] provisionDockerMachine start ...
	I1108 10:38:04.115137 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:04.142293 1235505 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:04.142603 1235505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34552 <nil> <nil>}
	I1108 10:38:04.142612 1235505 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:38:04.143286 1235505 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1108 10:38:07.342300 1235505 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-291044
	
	I1108 10:38:07.342316 1235505 ubuntu.go:182] provisioning hostname "no-preload-291044"
	I1108 10:38:07.342360 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:07.403464 1235505 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:07.403770 1235505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34552 <nil> <nil>}
	I1108 10:38:07.403781 1235505 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-291044 && echo "no-preload-291044" | sudo tee /etc/hostname
	I1108 10:38:07.641017 1235505 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-291044
	
	I1108 10:38:07.641108 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:07.685648 1235505 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:07.685949 1235505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34552 <nil> <nil>}
	I1108 10:38:07.685974 1235505 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-291044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-291044/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-291044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:38:07.870020 1235505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:38:07.870055 1235505 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:38:07.870085 1235505 ubuntu.go:190] setting up certificates
	I1108 10:38:07.870103 1235505 provision.go:84] configureAuth start
	I1108 10:38:07.870166 1235505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:38:07.895007 1235505 provision.go:143] copyHostCerts
	I1108 10:38:07.895076 1235505 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:38:07.895097 1235505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:38:07.895183 1235505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:38:07.895294 1235505 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:38:07.895306 1235505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:38:07.895334 1235505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:38:07.895397 1235505 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:38:07.895407 1235505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:38:07.895435 1235505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:38:07.895498 1235505 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.no-preload-291044 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-291044]
	I1108 10:38:08.248829 1235505 provision.go:177] copyRemoteCerts
	I1108 10:38:08.248947 1235505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:38:08.249019 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:08.279121 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:08.394578 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:38:08.433663 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 10:38:08.457735 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 10:38:07.372584 1234759 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:38:07.372610 1234759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:38:07.372682 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:07.437177 1234759 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:38:07.437199 1234759 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:38:07.437260 1234759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-515571
	I1108 10:38:07.437843 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:07.464455 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:07.490160 1234759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/newest-cni-515571/id_rsa Username:docker}
	I1108 10:38:07.753141 1234759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:38:07.795000 1234759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:38:07.829248 1234759 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:38:07.829321 1234759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:38:07.837723 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:38:07.837747 1234759 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:38:07.892998 1234759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:38:07.961494 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:38:07.961515 1234759 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:38:08.056606 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:38:08.056628 1234759 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:38:08.082389 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:38:08.082409 1234759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:38:08.154561 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:38:08.154591 1234759 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:38:08.265932 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:38:08.265961 1234759 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:38:08.327953 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:38:08.327973 1234759 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:38:08.354072 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:38:08.354095 1234759 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:38:08.374911 1234759 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:38:08.374934 1234759 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:38:08.401155 1234759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:38:08.489848 1235505 provision.go:87] duration metric: took 619.71888ms to configureAuth
	I1108 10:38:08.489877 1235505 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:38:08.490081 1235505 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:08.490199 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:08.524065 1235505 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:08.524369 1235505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34552 <nil> <nil>}
	I1108 10:38:08.524391 1235505 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:38:08.942666 1235505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:38:08.942736 1235505 machine.go:97] duration metric: took 4.827663169s to provisionDockerMachine
	I1108 10:38:08.942762 1235505 start.go:293] postStartSetup for "no-preload-291044" (driver="docker")
	I1108 10:38:08.942787 1235505 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:38:08.942897 1235505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:38:08.942973 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:08.975053 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:09.102411 1235505 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:38:09.109110 1235505 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:38:09.109135 1235505 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:38:09.109146 1235505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:38:09.109213 1235505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:38:09.109288 1235505 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:38:09.109389 1235505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:38:09.120992 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:09.150179 1235505 start.go:296] duration metric: took 207.389117ms for postStartSetup
	I1108 10:38:09.150309 1235505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:38:09.150379 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:09.178158 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:09.328172 1235505 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:38:09.337192 1235505 fix.go:56] duration metric: took 5.612351222s for fixHost
	I1108 10:38:09.337219 1235505 start.go:83] releasing machines lock for "no-preload-291044", held for 5.612402945s
	I1108 10:38:09.337291 1235505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-291044
	I1108 10:38:09.372792 1235505 ssh_runner.go:195] Run: cat /version.json
	I1108 10:38:09.372847 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:09.373098 1235505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:38:09.373155 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:09.420131 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:09.422826 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:09.676922 1235505 ssh_runner.go:195] Run: systemctl --version
	I1108 10:38:09.684076 1235505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:38:09.759160 1235505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:38:09.764696 1235505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:38:09.764823 1235505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:38:09.780927 1235505 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 10:38:09.781001 1235505 start.go:496] detecting cgroup driver to use...
	I1108 10:38:09.781046 1235505 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:38:09.781133 1235505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:38:09.804654 1235505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:38:09.822639 1235505 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:38:09.822748 1235505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:38:09.842720 1235505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:38:09.862106 1235505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:38:10.060756 1235505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:38:10.258324 1235505 docker.go:234] disabling docker service ...
	I1108 10:38:10.258398 1235505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:38:10.289695 1235505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:38:10.310524 1235505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:38:10.526912 1235505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:38:10.745187 1235505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:38:10.765740 1235505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:38:10.782315 1235505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:38:10.782432 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.797797 1235505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:38:10.797946 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.813701 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.828991 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.845923 1235505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:38:10.861767 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.877322 1235505 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.886622 1235505 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:10.901987 1235505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:38:10.914049 1235505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:38:10.921921 1235505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:11.142192 1235505 ssh_runner.go:195] Run: sudo systemctl restart crio
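	The sed sequence above rewrites CRI-O's drop-in config in place before the restart. A minimal sketch of how the result could be confirmed, assuming the standard drop-in path used by these commands (the grep/crictl calls are illustrative and were not part of this run):
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	  # Expected values, reconstructed from the sed expressions above:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",
	  sudo crictl info >/dev/null && echo "runtime responding"   # sanity-check once crio is back up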
	I1108 10:38:11.390191 1235505 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:38:11.390331 1235505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:38:11.394481 1235505 start.go:564] Will wait 60s for crictl version
	I1108 10:38:11.394626 1235505 ssh_runner.go:195] Run: which crictl
	I1108 10:38:11.398920 1235505 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:38:11.439213 1235505 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:38:11.439387 1235505 ssh_runner.go:195] Run: crio --version
	I1108 10:38:11.483016 1235505 ssh_runner.go:195] Run: crio --version
	I1108 10:38:11.541753 1235505 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:38:11.544664 1235505 cli_runner.go:164] Run: docker network inspect no-preload-291044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:38:11.570953 1235505 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 10:38:11.575198 1235505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:38:11.587740 1235505 kubeadm.go:884] updating cluster {Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:38:11.587845 1235505 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:11.587885 1235505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:38:11.656851 1235505 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:38:11.656931 1235505 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:38:11.656954 1235505 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 10:38:11.657098 1235505 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-291044 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:38:11.657220 1235505 ssh_runner.go:195] Run: crio config
	I1108 10:38:11.785316 1235505 cni.go:84] Creating CNI manager for ""
	I1108 10:38:11.785384 1235505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:38:11.785420 1235505 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:38:11.785476 1235505 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-291044 NodeName:no-preload-291044 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:38:11.785648 1235505 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-291044"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:38:11.785756 1235505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:38:11.796961 1235505 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:38:11.797076 1235505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:38:11.806445 1235505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 10:38:11.826669 1235505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:38:11.844320 1235505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
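	The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new before the restart path decides whether to reuse it. A hedged sketch of checking such a file by hand; kubeadm's config validator is a standard subcommand but was not invoked in this run:
	  # Validate the staged config against the kubeadm v1beta4 API (illustrative, not from this log)
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new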
	I1108 10:38:11.876878 1235505 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:38:11.880554 1235505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:38:11.893106 1235505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:12.088690 1235505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:38:12.106356 1235505 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044 for IP: 192.168.85.2
	I1108 10:38:12.106379 1235505 certs.go:195] generating shared ca certs ...
	I1108 10:38:12.106394 1235505 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:12.106536 1235505 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:38:12.106585 1235505 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:38:12.106599 1235505 certs.go:257] generating profile certs ...
	I1108 10:38:12.106681 1235505 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.key
	I1108 10:38:12.106745 1235505 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key.e7c39ab7
	I1108 10:38:12.106785 1235505 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key
	I1108 10:38:12.106887 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:38:12.106919 1235505 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:38:12.106931 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:38:12.106958 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:38:12.106982 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:38:12.107013 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:38:12.107059 1235505 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:12.112564 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:38:12.177235 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:38:12.214067 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:38:12.244879 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:38:12.285111 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 10:38:12.325139 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:38:12.399632 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:38:12.457923 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:38:12.495066 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:38:12.543949 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:38:12.571475 1235505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:38:12.595340 1235505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:38:12.610475 1235505 ssh_runner.go:195] Run: openssl version
	I1108 10:38:12.618186 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:38:12.627430 1235505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:12.632377 1235505 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:12.632523 1235505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:12.676985 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:38:12.686285 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:38:12.695613 1235505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:38:12.700061 1235505 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:38:12.700126 1235505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:38:12.748238 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:38:12.759189 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:38:12.768022 1235505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:38:12.775435 1235505 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:38:12.775504 1235505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:38:12.817853 1235505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:38:12.830631 1235505 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:38:12.836260 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 10:38:12.897293 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 10:38:12.951287 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 10:38:13.020868 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 10:38:13.117327 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 10:38:13.188198 1235505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 10:38:13.306290 1235505 kubeadm.go:401] StartCluster: {Name:no-preload-291044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-291044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:38:13.306394 1235505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:38:13.306459 1235505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:38:13.423504 1235505 cri.go:89] found id: "5ff011c39fa1a4e6ccf1602407612d6fd09adb5c8853548d45cbc57693896266"
	I1108 10:38:13.423526 1235505 cri.go:89] found id: "99b5f6a8373260a1fb2a88d8f9ff8805d70fb0e4e09b4e2bea1c955d090e83a3"
	I1108 10:38:13.423531 1235505 cri.go:89] found id: ""
	I1108 10:38:13.423580 1235505 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 10:38:13.477357 1235505 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:38:13Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:38:13.477459 1235505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:38:13.520913 1235505 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 10:38:13.520935 1235505 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 10:38:13.520997 1235505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 10:38:13.565748 1235505 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 10:38:13.566317 1235505 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-291044" does not appear in /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:13.566567 1235505 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-1027379/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-291044" cluster setting kubeconfig missing "no-preload-291044" context setting]
	I1108 10:38:13.567022 1235505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:13.568417 1235505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 10:38:13.596855 1235505 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 10:38:13.596887 1235505 kubeadm.go:602] duration metric: took 75.945699ms to restartPrimaryControlPlane
	I1108 10:38:13.596897 1235505 kubeadm.go:403] duration metric: took 290.618895ms to StartCluster
	I1108 10:38:13.596916 1235505 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:13.596982 1235505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:13.597848 1235505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:13.598080 1235505 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:38:13.598434 1235505 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:13.598438 1235505 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:38:13.598561 1235505 addons.go:70] Setting storage-provisioner=true in profile "no-preload-291044"
	I1108 10:38:13.598570 1235505 addons.go:70] Setting dashboard=true in profile "no-preload-291044"
	I1108 10:38:13.598576 1235505 addons.go:239] Setting addon storage-provisioner=true in "no-preload-291044"
	W1108 10:38:13.598583 1235505 addons.go:248] addon storage-provisioner should already be in state true
	I1108 10:38:13.598590 1235505 addons.go:70] Setting default-storageclass=true in profile "no-preload-291044"
	I1108 10:38:13.598601 1235505 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-291044"
	I1108 10:38:13.598608 1235505 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:38:13.598890 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:13.599048 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:13.598583 1235505 addons.go:239] Setting addon dashboard=true in "no-preload-291044"
	W1108 10:38:13.600680 1235505 addons.go:248] addon dashboard should already be in state true
	I1108 10:38:13.600713 1235505 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:38:13.601157 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:13.606479 1235505 out.go:179] * Verifying Kubernetes components...
	I1108 10:38:13.609608 1235505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:13.641415 1235505 addons.go:239] Setting addon default-storageclass=true in "no-preload-291044"
	W1108 10:38:13.641439 1235505 addons.go:248] addon default-storageclass should already be in state true
	I1108 10:38:13.641465 1235505 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:38:13.641881 1235505 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:38:13.658693 1235505 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:38:13.662164 1235505 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:38:13.662188 1235505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:38:13.662256 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:13.675023 1235505 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:38:13.675049 1235505 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:38:13.675129 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:13.682720 1235505 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 10:38:13.690538 1235505 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 10:38:17.749646 1234759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.954561881s)
	I1108 10:38:17.749711 1234759 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (9.920381239s)
	I1108 10:38:17.749724 1234759 api_server.go:72] duration metric: took 10.472273606s to wait for apiserver process to appear ...
	I1108 10:38:17.749729 1234759 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:38:17.749745 1234759 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 10:38:17.750070 1234759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.857049914s)
	I1108 10:38:17.750366 1234759 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.349183287s)
	I1108 10:38:17.753293 1234759 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-515571 addons enable metrics-server
	
	I1108 10:38:17.778749 1234759 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 10:38:17.780023 1234759 api_server.go:141] control plane version: v1.34.1
	I1108 10:38:17.780048 1234759 api_server.go:131] duration metric: took 30.312096ms to wait for apiserver health ...
	I1108 10:38:17.780075 1234759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:38:17.786382 1234759 system_pods.go:59] 8 kube-system pods found
	I1108 10:38:17.786421 1234759 system_pods.go:61] "coredns-66bc5c9577-tzpcv" [e29d787c-07fa-45a9-8486-67e87bde431e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 10:38:17.786431 1234759 system_pods.go:61] "etcd-newest-cni-515571" [5340f708-b23d-4f0b-bda7-995b964333e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:38:17.786437 1234759 system_pods.go:61] "kindnet-6vtjh" [69f8e634-a5cb-438a-a6ac-5762a43d39e5] Running
	I1108 10:38:17.786445 1234759 system_pods.go:61] "kube-apiserver-newest-cni-515571" [82e0acec-a5e0-43ed-b26f-072f360ced86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:38:17.786461 1234759 system_pods.go:61] "kube-controller-manager-newest-cni-515571" [3966d3a4-3fac-4d01-858a-27ad292e0b25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:38:17.786470 1234759 system_pods.go:61] "kube-proxy-cqlhl" [0385ed05-d22d-4bb0-b165-eeb7226e70fd] Running
	I1108 10:38:17.786478 1234759 system_pods.go:61] "kube-scheduler-newest-cni-515571" [6e339344-0ff3-412a-b78f-55ef23e04a9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:38:17.786489 1234759 system_pods.go:61] "storage-provisioner" [db0e8015-0d1b-4030-ad64-744fe3afd379] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 10:38:17.786496 1234759 system_pods.go:74] duration metric: took 6.414219ms to wait for pod list to return data ...
	I1108 10:38:17.786510 1234759 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:38:17.798309 1234759 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 10:38:17.798781 1234759 default_sa.go:45] found service account: "default"
	I1108 10:38:17.798805 1234759 default_sa.go:55] duration metric: took 12.287661ms for default service account to be created ...
	I1108 10:38:17.798818 1234759 kubeadm.go:587] duration metric: took 10.521365716s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 10:38:17.798837 1234759 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:38:17.801047 1234759 addons.go:515] duration metric: took 10.523195791s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 10:38:17.805520 1234759 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:38:17.805567 1234759 node_conditions.go:123] node cpu capacity is 2
	I1108 10:38:17.805580 1234759 node_conditions.go:105] duration metric: took 6.73754ms to run NodePressure ...
	I1108 10:38:17.805591 1234759 start.go:242] waiting for startup goroutines ...
	I1108 10:38:17.805599 1234759 start.go:247] waiting for cluster config update ...
	I1108 10:38:17.805611 1234759 start.go:256] writing updated cluster config ...
	I1108 10:38:17.805904 1234759 ssh_runner.go:195] Run: rm -f paused
	I1108 10:38:17.906779 1234759 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:38:17.909800 1234759 out.go:179] * Done! kubectl is now configured to use "newest-cni-515571" cluster and "default" namespace by default
	I1108 10:38:13.693458 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 10:38:13.693481 1235505 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 10:38:13.693556 1235505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:38:13.724704 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:13.725365 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:13.750023 1235505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:38:14.151273 1235505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:38:14.173840 1235505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:38:14.199409 1235505 node_ready.go:35] waiting up to 6m0s for node "no-preload-291044" to be "Ready" ...
	I1108 10:38:14.250703 1235505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:38:14.265538 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 10:38:14.265559 1235505 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 10:38:14.336710 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 10:38:14.336731 1235505 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 10:38:14.456827 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 10:38:14.456849 1235505 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 10:38:14.599095 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 10:38:14.599159 1235505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 10:38:14.761828 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 10:38:14.761894 1235505 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 10:38:14.813101 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 10:38:14.813168 1235505 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 10:38:14.850022 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 10:38:14.850097 1235505 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 10:38:14.876116 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 10:38:14.876180 1235505 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 10:38:14.920809 1235505 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:38:14.920873 1235505 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 10:38:14.956980 1235505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 10:38:20.574145 1235505 node_ready.go:49] node "no-preload-291044" is "Ready"
	I1108 10:38:20.574171 1235505 node_ready.go:38] duration metric: took 6.37466936s for node "no-preload-291044" to be "Ready" ...
	I1108 10:38:20.574184 1235505 api_server.go:52] waiting for apiserver process to appear ...
	I1108 10:38:20.574267 1235505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:38:22.843085 1235505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.669169909s)
	I1108 10:38:22.843144 1235505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.592423938s)
	I1108 10:38:22.843394 1235505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.886344558s)
	I1108 10:38:22.843630 1235505 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.269335747s)
	I1108 10:38:22.843651 1235505 api_server.go:72] duration metric: took 9.24553997s to wait for apiserver process to appear ...
	I1108 10:38:22.843657 1235505 api_server.go:88] waiting for apiserver healthz status ...
	I1108 10:38:22.843673 1235505 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 10:38:22.846565 1235505 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-291044 addons enable metrics-server
	
	I1108 10:38:22.857813 1235505 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 10:38:22.858905 1235505 api_server.go:141] control plane version: v1.34.1
	I1108 10:38:22.858925 1235505 api_server.go:131] duration metric: took 15.262219ms to wait for apiserver health ...
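	The healthz wait above amounts to polling the control-plane endpoint for this profile. A rough shell equivalent, assuming anonymous access to the health endpoints is allowed as in a default minikube cluster (this curl is illustrative; minikube issues the request in-process):
	  curl -sk --max-time 2 https://192.168.85.2:8443/healthz   # prints "ok" once the apiserver is healthy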
	I1108 10:38:22.858934 1235505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 10:38:22.862922 1235505 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 10:38:22.864019 1235505 system_pods.go:59] 8 kube-system pods found
	I1108 10:38:22.864056 1235505 system_pods.go:61] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:38:22.864067 1235505 system_pods.go:61] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:38:22.864077 1235505 system_pods.go:61] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:38:22.864093 1235505 system_pods.go:61] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:38:22.864100 1235505 system_pods.go:61] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:38:22.864109 1235505 system_pods.go:61] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:38:22.864121 1235505 system_pods.go:61] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:38:22.864126 1235505 system_pods.go:61] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Running
	I1108 10:38:22.864134 1235505 system_pods.go:74] duration metric: took 5.194653ms to wait for pod list to return data ...
	I1108 10:38:22.864145 1235505 default_sa.go:34] waiting for default service account to be created ...
	I1108 10:38:22.865569 1235505 addons.go:515] duration metric: took 9.267137625s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 10:38:22.867400 1235505 default_sa.go:45] found service account: "default"
	I1108 10:38:22.867450 1235505 default_sa.go:55] duration metric: took 3.298266ms for default service account to be created ...
	I1108 10:38:22.867474 1235505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 10:38:22.870501 1235505 system_pods.go:86] 8 kube-system pods found
	I1108 10:38:22.870564 1235505 system_pods.go:89] "coredns-66bc5c9577-nvtlg" [87be45de-22b0-41ae-8e64-a2bbdcdad8cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 10:38:22.870593 1235505 system_pods.go:89] "etcd-no-preload-291044" [1daf564a-005f-481a-8768-c0a804fc20c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 10:38:22.870630 1235505 system_pods.go:89] "kindnet-nct2b" [0bc61516-3295-45ae-8385-f44884db443d] Running
	I1108 10:38:22.870656 1235505 system_pods.go:89] "kube-apiserver-no-preload-291044" [da078cda-3142-425e-89aa-bd719fb5a5b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 10:38:22.870684 1235505 system_pods.go:89] "kube-controller-manager-no-preload-291044" [93a1bbad-1acb-4644-9638-a271e86cfaa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 10:38:22.870710 1235505 system_pods.go:89] "kube-proxy-2m8tx" [ef25d22a-5d36-45dd-b9c5-2a78edcf33ef] Running
	I1108 10:38:22.870743 1235505 system_pods.go:89] "kube-scheduler-no-preload-291044" [9ba6e37a-745f-4b91-babe-9f55878f29cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 10:38:22.870775 1235505 system_pods.go:89] "storage-provisioner" [a4a078b4-83c3-48a1-9d2d-d92b0275ba61] Running
	I1108 10:38:22.870800 1235505 system_pods.go:126] duration metric: took 3.30588ms to wait for k8s-apps to be running ...
	I1108 10:38:22.870823 1235505 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 10:38:22.870903 1235505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:38:22.887850 1235505 system_svc.go:56] duration metric: took 17.019124ms WaitForService to wait for kubelet
	I1108 10:38:22.887918 1235505 kubeadm.go:587] duration metric: took 9.28980463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:38:22.887952 1235505 node_conditions.go:102] verifying NodePressure condition ...
	I1108 10:38:22.893078 1235505 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1108 10:38:22.893155 1235505 node_conditions.go:123] node cpu capacity is 2
	I1108 10:38:22.893182 1235505 node_conditions.go:105] duration metric: took 5.207715ms to run NodePressure ...
	I1108 10:38:22.893225 1235505 start.go:242] waiting for startup goroutines ...
	I1108 10:38:22.893252 1235505 start.go:247] waiting for cluster config update ...
	I1108 10:38:22.893280 1235505 start.go:256] writing updated cluster config ...
	I1108 10:38:22.893591 1235505 ssh_runner.go:195] Run: rm -f paused
	I1108 10:38:22.897680 1235505 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:38:22.901662 1235505 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nvtlg" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> CRI-O <==
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.352929715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.364622448Z" level=info msg="Running pod sandbox: kube-system/kindnet-6vtjh/POD" id=730897a0-2964-4dcd-9f19-56ec7b64390b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.364709247Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.405175988Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ae8b5de0-459a-4110-a171-d946ab05e2ae name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.412107968Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=730897a0-2964-4dcd-9f19-56ec7b64390b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.441864101Z" level=info msg="Ran pod sandbox f466d952445e030965d8f99dc737b37e80a0d29dc6c09b577cb2c582f76cdd54 with infra container: kube-system/kube-proxy-cqlhl/POD" id=ae8b5de0-459a-4110-a171-d946ab05e2ae name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.453799617Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3d2cbd81-eb68-4be8-aa0d-02186e296f24 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.460388476Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a91eb870-49f5-4db7-81f0-036fdc259d26 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.465733073Z" level=info msg="Creating container: kube-system/kube-proxy-cqlhl/kube-proxy" id=9005c773-b31b-4598-a527-aafa63386d1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.465834026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.483236301Z" level=info msg="Ran pod sandbox 8f2355aaa4c3e2bc4dc78326912d7f7bae1deef7b8b08ef39d1f300d55f0a4b2 with infra container: kube-system/kindnet-6vtjh/POD" id=730897a0-2964-4dcd-9f19-56ec7b64390b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.49560103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.496174683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.507527397Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=de701164-57f0-4312-90bf-dd07925a4cbe name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.518457831Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7b96f13c-7f45-4325-af22-0a67b8bdb7d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.548121708Z" level=info msg="Creating container: kube-system/kindnet-6vtjh/kindnet-cni" id=8ab0e51c-a1e8-4c3a-9551-865c949cc894 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.548215695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.561070626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.565602381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.608215208Z" level=info msg="Created container aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345: kube-system/kube-proxy-cqlhl/kube-proxy" id=9005c773-b31b-4598-a527-aafa63386d1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.609380384Z" level=info msg="Starting container: aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345" id=10a64431-9de8-439b-b134-60928d60effc name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.612090302Z" level=info msg="Started container" PID=1058 containerID=aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345 description=kube-system/kube-proxy-cqlhl/kube-proxy id=10a64431-9de8-439b-b134-60928d60effc name=/runtime.v1.RuntimeService/StartContainer sandboxID=f466d952445e030965d8f99dc737b37e80a0d29dc6c09b577cb2c582f76cdd54
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.686085635Z" level=info msg="Created container 93511cf560575ebe917dce5846ff27235243c676c68ce71935565137b991bee0: kube-system/kindnet-6vtjh/kindnet-cni" id=8ab0e51c-a1e8-4c3a-9551-865c949cc894 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.686974084Z" level=info msg="Starting container: 93511cf560575ebe917dce5846ff27235243c676c68ce71935565137b991bee0" id=235a1066-2544-48e6-b5bf-1178b13df11a name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:38:16 newest-cni-515571 crio[613]: time="2025-11-08T10:38:16.690252822Z" level=info msg="Started container" PID=1068 containerID=93511cf560575ebe917dce5846ff27235243c676c68ce71935565137b991bee0 description=kube-system/kindnet-6vtjh/kindnet-cni id=235a1066-2544-48e6-b5bf-1178b13df11a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f2355aaa4c3e2bc4dc78326912d7f7bae1deef7b8b08ef39d1f300d55f0a4b2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	93511cf560575       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   8f2355aaa4c3e       kindnet-6vtjh                               kube-system
	aa314f8fce25c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   f466d952445e0       kube-proxy-cqlhl                            kube-system
	93d0c8e070cb6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago      Running             etcd                      1                   b24ae7dd8a615       etcd-newest-cni-515571                      kube-system
	38bc479dedc5a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago      Running             kube-scheduler            1                   fa232d3318380       kube-scheduler-newest-cni-515571            kube-system
	02f8d0ac9dba3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago      Running             kube-controller-manager   1                   995be64be113b       kube-controller-manager-newest-cni-515571   kube-system
	8028c5744e9c2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago      Running             kube-apiserver            1                   20e3366c89337       kube-apiserver-newest-cni-515571            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-515571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-515571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=newest-cni-515571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_37_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:37:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-515571
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:38:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:38:14 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:38:14 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:38:14 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 10:38:14 +0000   Sat, 08 Nov 2025 10:37:42 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-515571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                da96ae8e-28b2-4384-8ee4-16fe0d13fbbb
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-515571                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-6vtjh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-515571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-515571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-cqlhl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-515571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientPID     36s                kubelet          Node newest-cni-515571 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node newest-cni-515571 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node newest-cni-515571 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           33s                node-controller  Node newest-cni-515571 event: Registered Node newest-cni-515571 in Controller
	  Normal   Starting                 19s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node newest-cni-515571 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node newest-cni-515571 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x8 over 19s)  kubelet          Node newest-cni-515571 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7s                 node-controller  Node newest-cni-515571 event: Registered Node newest-cni-515571 in Controller
	
	
	==> dmesg <==
	[Nov 8 10:17] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:36] overlayfs: idmapped layers are currently not supported
	[ +30.788294] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:38] overlayfs: idmapped layers are currently not supported
	[  +6.100629] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [93d0c8e070cb668a489e9ad2a2665a4c28e5a124650ce0d95549a343c79037a0] <==
	{"level":"warn","ts":"2025-11-08T10:38:11.889884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:11.904727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:11.923301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:11.959937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:11.988964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.005700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.061530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.078088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.113153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.161154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.206087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.238546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.256335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.284171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.318650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.336003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.379211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.409527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.438668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.464499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.517060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.556351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.577738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.586582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:12.770204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60682","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:38:25 up  9:20,  0 user,  load average: 6.52, 4.45, 3.38
	Linux newest-cni-515571 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [93511cf560575ebe917dce5846ff27235243c676c68ce71935565137b991bee0] <==
	I1108 10:38:16.819866       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:38:16.820259       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 10:38:16.821072       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:38:16.821138       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:38:16.821172       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:38:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:38:17.014069       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:38:17.014160       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:38:17.014193       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:38:17.016260       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [8028c5744e9c2fa0cbfd055e941f992b8050ed81b1668d7cdfad5fcf592a4fea] <==
	I1108 10:38:14.630045       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 10:38:14.647327       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 10:38:14.647401       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:38:14.647474       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 10:38:14.647508       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 10:38:14.661277       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:38:14.661311       1 policy_source.go:240] refreshing policies
	I1108 10:38:14.661962       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:38:14.681494       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:38:14.682145       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:38:14.682158       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:38:14.683598       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1108 10:38:14.814235       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:38:15.174313       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:38:16.159179       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:38:16.933701       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:38:17.033836       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:38:17.083005       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:38:17.094758       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:38:17.240811       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.189.32"}
	I1108 10:38:17.300846       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.237.151"}
	I1108 10:38:19.087286       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:38:19.202050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:38:19.260248       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 10:38:19.340975       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [02f8d0ac9dba3db69b485cb9b56006f12f108e27f5767ecbcca542963009eec6] <==
	I1108 10:38:18.888535       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 10:38:18.890924       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 10:38:18.891183       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 10:38:18.897127       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 10:38:18.900865       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:38:18.905381       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:38:18.905774       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:38:18.925418       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 10:38:18.925661       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 10:38:18.951684       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:38:18.960742       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 10:38:18.960789       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 10:38:18.974722       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 10:38:18.975098       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 10:38:18.975352       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-515571"
	I1108 10:38:18.975467       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 10:38:18.976153       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 10:38:18.976458       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:38:18.976475       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:38:18.976482       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:38:18.985117       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:38:18.996195       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 10:38:18.996387       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:38:19.000599       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:38:19.004007       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345] <==
	I1108 10:38:17.533333       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:38:17.978374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:38:18.079425       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:38:18.079546       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 10:38:18.079673       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:38:18.251606       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:38:18.251667       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:38:18.262913       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:38:18.263255       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:38:18.263279       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:38:18.276242       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:38:18.276268       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:38:18.276619       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:38:18.276696       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:38:18.278433       1 config.go:200] "Starting service config controller"
	I1108 10:38:18.278453       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:38:18.278526       1 config.go:309] "Starting node config controller"
	I1108 10:38:18.278536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:38:18.278542       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:38:18.380163       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 10:38:18.380232       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 10:38:18.380513       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [38bc479dedc5ae4fd9d713123be920853a980f8e2e86f024661007578f58babe] <==
	I1108 10:38:12.147391       1 serving.go:386] Generated self-signed cert in-memory
	I1108 10:38:16.564851       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:38:16.564883       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:38:16.631524       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:38:16.631634       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 10:38:16.631655       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 10:38:16.631678       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:38:16.633973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:38:16.633987       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:38:16.634006       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:38:16.634043       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:38:16.732126       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 10:38:16.737774       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 10:38:16.737882       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.575339     725 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:newest-cni-515571\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-515571' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.575377     725 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-newest-cni-515571\" is forbidden: User \"system:node:newest-cni-515571\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-515571' and this object" podUID="2ae1802e0a9b46b324af26050bccbc9a" pod="kube-system/kube-scheduler-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.616663     725 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-cqlhl\" is forbidden: User \"system:node:newest-cni-515571\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-515571' and this object" podUID="0385ed05-d22d-4bb0-b165-eeb7226e70fd" pod="kube-system/kube-proxy-cqlhl"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.754409     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-515571\" already exists" pod="kube-system/kube-scheduler-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.754447     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.819816     725 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.819907     725 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.819932     725 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.824101     725 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.861123     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-515571\" already exists" pod="kube-system/etcd-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.861160     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.915788     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-515571\" already exists" pod="kube-system/kube-apiserver-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: I1108 10:38:14.915822     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-515571"
	Nov 08 10:38:14 newest-cni-515571 kubelet[725]: E1108 10:38:14.970184     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-515571\" already exists" pod="kube-system/kube-controller-manager-newest-cni-515571"
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577474     725 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577513     725 projected.go:196] Error preparing data for projected volume kube-api-access-28jx7 for pod kube-system/kindnet-6vtjh: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-515571" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-515571' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577596     725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69f8e634-a5cb-438a-a6ac-5762a43d39e5-kube-api-access-28jx7 podName:69f8e634-a5cb-438a-a6ac-5762a43d39e5 nodeName:}" failed. No retries permitted until 2025-11-08 10:38:16.07756959 +0000 UTC m=+9.734043952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-28jx7" (UniqueName: "kubernetes.io/projected/69f8e634-a5cb-438a-a6ac-5762a43d39e5-kube-api-access-28jx7") pod "kindnet-6vtjh" (UID: "69f8e634-a5cb-438a-a6ac-5762a43d39e5") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-515571" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-515571' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577639     725 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577653     725 projected.go:196] Error preparing data for projected volume kube-api-access-k45bh for pod kube-system/kube-proxy-cqlhl: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-515571" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-515571' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Nov 08 10:38:15 newest-cni-515571 kubelet[725]: E1108 10:38:15.577688     725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0385ed05-d22d-4bb0-b165-eeb7226e70fd-kube-api-access-k45bh podName:0385ed05-d22d-4bb0-b165-eeb7226e70fd nodeName:}" failed. No retries permitted until 2025-11-08 10:38:16.07767787 +0000 UTC m=+9.734152232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k45bh" (UniqueName: "kubernetes.io/projected/0385ed05-d22d-4bb0-b165-eeb7226e70fd-kube-api-access-k45bh") pod "kube-proxy-cqlhl" (UID: "0385ed05-d22d-4bb0-b165-eeb7226e70fd") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-515571" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-515571' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Nov 08 10:38:16 newest-cni-515571 kubelet[725]: I1108 10:38:16.220606     725 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 08 10:38:16 newest-cni-515571 kubelet[725]: E1108 10:38:16.584850     725 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/f94bf5ad2ae9d7e18ff7eeff1e486e27fc90cf5115df8c64c64a9f9548c1fc1d/crio/crio-aa314f8fce25caf9ace4695d7ddf949c4f86848d94961122deda5516c541c345\": RecentStats: unable to find data in memory cache]"
	Nov 08 10:38:19 newest-cni-515571 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:38:19 newest-cni-515571 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:38:19 newest-cni-515571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-515571 -n newest-cni-515571
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-515571 -n newest-cni-515571: exit status 2 (398.102851ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-515571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-tzpcv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-2hmg6 kubernetes-dashboard-855c9754f9-ngdnl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-515571 describe pod coredns-66bc5c9577-tzpcv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-2hmg6 kubernetes-dashboard-855c9754f9-ngdnl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-515571 describe pod coredns-66bc5c9577-tzpcv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-2hmg6 kubernetes-dashboard-855c9754f9-ngdnl: exit status 1 (104.963955ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-tzpcv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-2hmg6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-ngdnl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-515571 describe pod coredns-66bc5c9577-tzpcv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-2hmg6 kubernetes-dashboard-855c9754f9-ngdnl: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-291044 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-291044 --alsologtostderr -v=1: exit status 80 (2.142370165s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-291044 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:39:09.325148 1242220 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:39:09.325335 1242220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:39:09.325350 1242220 out.go:374] Setting ErrFile to fd 2...
	I1108 10:39:09.325356 1242220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:39:09.325649 1242220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:39:09.325950 1242220 out.go:368] Setting JSON to false
	I1108 10:39:09.326037 1242220 mustload.go:66] Loading cluster: no-preload-291044
	I1108 10:39:09.326490 1242220 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:39:09.327034 1242220 cli_runner.go:164] Run: docker container inspect no-preload-291044 --format={{.State.Status}}
	I1108 10:39:09.344574 1242220 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:39:09.344887 1242220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:39:09.459635 1242220 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:39:09.44785833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:39:09.460310 1242220 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-291044 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 10:39:09.463938 1242220 out.go:179] * Pausing node no-preload-291044 ... 
	I1108 10:39:09.466895 1242220 host.go:66] Checking if "no-preload-291044" exists ...
	I1108 10:39:09.467237 1242220 ssh_runner.go:195] Run: systemctl --version
	I1108 10:39:09.467289 1242220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291044
	I1108 10:39:09.503384 1242220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/no-preload-291044/id_rsa Username:docker}
	I1108 10:39:09.632051 1242220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:39:09.647889 1242220 pause.go:52] kubelet running: true
	I1108 10:39:09.647992 1242220 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:39:09.923709 1242220 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:39:09.923801 1242220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:39:10.015865 1242220 cri.go:89] found id: "65cbe2bb9985bf3d82c006541771b098511632bf16f3207681bdffd6065d3a5a"
	I1108 10:39:10.015890 1242220 cri.go:89] found id: "c33eeb214e958294220dbe340086eab0da97ee59bafe81bc2bc509133f4b77b0"
	I1108 10:39:10.015896 1242220 cri.go:89] found id: "ab334d5bd7ba72aea7af822ddc9751317c502a94c09a2740bcae5d2371922e43"
	I1108 10:39:10.015900 1242220 cri.go:89] found id: "d1dbd6cc1f1dc5794d5f0bdb0bec35359fb7abbfb462e4a28128e68598c92cad"
	I1108 10:39:10.015903 1242220 cri.go:89] found id: "5b59e1565e30b8151a42f654301131ca5a9b85a2c6f83767a903111bd6f7c44b"
	I1108 10:39:10.015906 1242220 cri.go:89] found id: "5ff011c39fa1a4e6ccf1602407612d6fd09adb5c8853548d45cbc57693896266"
	I1108 10:39:10.015928 1242220 cri.go:89] found id: "fef0c37718a669a3a308b4a0ee7aa3629f5c411a3f86070c7497fead7a730494"
	I1108 10:39:10.015945 1242220 cri.go:89] found id: "99b5f6a8373260a1fb2a88d8f9ff8805d70fb0e4e09b4e2bea1c955d090e83a3"
	I1108 10:39:10.015949 1242220 cri.go:89] found id: "daf6ee479a7cae60eb0974a556bff3ab215747a99f91f962708a80a61d9ba6f5"
	I1108 10:39:10.015960 1242220 cri.go:89] found id: "9ae590763b2f2cda1d610cc1f78b2ea77114a7740040e661837d6264d55fa642"
	I1108 10:39:10.015969 1242220 cri.go:89] found id: "6c6e800b9b138a613ccf880559f7dab5ee4100ad4b76378594c6f7fa68a7d4af"
	I1108 10:39:10.015972 1242220 cri.go:89] found id: ""
	I1108 10:39:10.016037 1242220 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:39:10.040610 1242220 retry.go:31] will retry after 165.349777ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:39:10Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:39:10.207037 1242220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:39:10.220535 1242220 pause.go:52] kubelet running: false
	I1108 10:39:10.220617 1242220 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:39:10.481676 1242220 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:39:10.481806 1242220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:39:10.645878 1242220 cri.go:89] found id: "65cbe2bb9985bf3d82c006541771b098511632bf16f3207681bdffd6065d3a5a"
	I1108 10:39:10.645897 1242220 cri.go:89] found id: "c33eeb214e958294220dbe340086eab0da97ee59bafe81bc2bc509133f4b77b0"
	I1108 10:39:10.645902 1242220 cri.go:89] found id: "ab334d5bd7ba72aea7af822ddc9751317c502a94c09a2740bcae5d2371922e43"
	I1108 10:39:10.645905 1242220 cri.go:89] found id: "d1dbd6cc1f1dc5794d5f0bdb0bec35359fb7abbfb462e4a28128e68598c92cad"
	I1108 10:39:10.645909 1242220 cri.go:89] found id: "5b59e1565e30b8151a42f654301131ca5a9b85a2c6f83767a903111bd6f7c44b"
	I1108 10:39:10.645912 1242220 cri.go:89] found id: "5ff011c39fa1a4e6ccf1602407612d6fd09adb5c8853548d45cbc57693896266"
	I1108 10:39:10.645915 1242220 cri.go:89] found id: "fef0c37718a669a3a308b4a0ee7aa3629f5c411a3f86070c7497fead7a730494"
	I1108 10:39:10.645918 1242220 cri.go:89] found id: "99b5f6a8373260a1fb2a88d8f9ff8805d70fb0e4e09b4e2bea1c955d090e83a3"
	I1108 10:39:10.645935 1242220 cri.go:89] found id: "daf6ee479a7cae60eb0974a556bff3ab215747a99f91f962708a80a61d9ba6f5"
	I1108 10:39:10.645942 1242220 cri.go:89] found id: "9ae590763b2f2cda1d610cc1f78b2ea77114a7740040e661837d6264d55fa642"
	I1108 10:39:10.645945 1242220 cri.go:89] found id: "6c6e800b9b138a613ccf880559f7dab5ee4100ad4b76378594c6f7fa68a7d4af"
	I1108 10:39:10.645948 1242220 cri.go:89] found id: ""
	I1108 10:39:10.645995 1242220 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:39:10.688918 1242220 retry.go:31] will retry after 213.090405ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:39:10Z" level=error msg="open /run/runc: no such file or directory"
	I1108 10:39:10.902220 1242220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:39:10.924222 1242220 pause.go:52] kubelet running: false
	I1108 10:39:10.924287 1242220 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 10:39:11.236008 1242220 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 10:39:11.236085 1242220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 10:39:11.373560 1242220 cri.go:89] found id: "65cbe2bb9985bf3d82c006541771b098511632bf16f3207681bdffd6065d3a5a"
	I1108 10:39:11.373589 1242220 cri.go:89] found id: "c33eeb214e958294220dbe340086eab0da97ee59bafe81bc2bc509133f4b77b0"
	I1108 10:39:11.373595 1242220 cri.go:89] found id: "ab334d5bd7ba72aea7af822ddc9751317c502a94c09a2740bcae5d2371922e43"
	I1108 10:39:11.373600 1242220 cri.go:89] found id: "d1dbd6cc1f1dc5794d5f0bdb0bec35359fb7abbfb462e4a28128e68598c92cad"
	I1108 10:39:11.373603 1242220 cri.go:89] found id: "5b59e1565e30b8151a42f654301131ca5a9b85a2c6f83767a903111bd6f7c44b"
	I1108 10:39:11.373606 1242220 cri.go:89] found id: "5ff011c39fa1a4e6ccf1602407612d6fd09adb5c8853548d45cbc57693896266"
	I1108 10:39:11.373609 1242220 cri.go:89] found id: "fef0c37718a669a3a308b4a0ee7aa3629f5c411a3f86070c7497fead7a730494"
	I1108 10:39:11.373612 1242220 cri.go:89] found id: "99b5f6a8373260a1fb2a88d8f9ff8805d70fb0e4e09b4e2bea1c955d090e83a3"
	I1108 10:39:11.373616 1242220 cri.go:89] found id: "daf6ee479a7cae60eb0974a556bff3ab215747a99f91f962708a80a61d9ba6f5"
	I1108 10:39:11.373622 1242220 cri.go:89] found id: "9ae590763b2f2cda1d610cc1f78b2ea77114a7740040e661837d6264d55fa642"
	I1108 10:39:11.373625 1242220 cri.go:89] found id: "6c6e800b9b138a613ccf880559f7dab5ee4100ad4b76378594c6f7fa68a7d4af"
	I1108 10:39:11.373628 1242220 cri.go:89] found id: ""
	I1108 10:39:11.373677 1242220 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 10:39:11.396070 1242220 out.go:203] 
	W1108 10:39:11.399042 1242220 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:39:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T10:39:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 10:39:11.399065 1242220 out.go:285] * 
	* 
	W1108 10:39:11.408049 1242220 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 10:39:11.411018 1242220 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-291044 --alsologtostderr -v=1 failed: exit status 80
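The pause failure above comes from container enumeration, not from pausing itself: `sudo runc list -f json` exits 1 because `/run/runc` is missing on the node, even though crictl had just listed the kube-system containers moments earlier. A minimal way to confirm that state on the node, reusing only commands that already appear in this log (profile name taken from this run; the listing below is a diagnostic sketch, not harness output):

    # SSH into the node and check the runtime state directory the pause path relies on
    out/minikube-linux-arm64 -p no-preload-291044 ssh -- ls -ld /run/runc          # expected: "No such file or directory"
    out/minikube-linux-arm64 -p no-preload-291044 ssh -- sudo crictl ps -a         # CRI-O still reports the container IDs found above
    out/minikube-linux-arm64 -p no-preload-291044 ssh -- sudo runc list -f json    # reproduces the GUEST_PAUSE error directly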
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-291044
helpers_test.go:243: (dbg) docker inspect no-preload-291044:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a",
	        "Created": "2025-11-08T10:36:27.945864714Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1235674,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:38:03.785005639Z",
	            "FinishedAt": "2025-11-08T10:38:02.824539639Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/hostname",
	        "HostsPath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/hosts",
	        "LogPath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a-json.log",
	        "Name": "/no-preload-291044",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-291044:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-291044",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a",
	                "LowerDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-291044",
	                "Source": "/var/lib/docker/volumes/no-preload-291044/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-291044",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-291044",
	                "name.minikube.sigs.k8s.io": "no-preload-291044",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "104edd42bdd92bee327412227bf59e111db6cecbfa395faf1287a2085a42f70d",
	            "SandboxKey": "/var/run/docker/netns/104edd42bdd9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34552"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34553"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34556"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34554"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34555"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-291044": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:ae:6f:a2:3e:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "15d9ca830af40cf01657fa03afa3cf3bcbb4c14b9a6b5c8dfc90bca89de4ebc4",
	                    "EndpointID": "8442ee0cc2e5f378efe33e4537de30eece01257c82228a7ae3e104e55606d85d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-291044",
	                        "4dafcc75ae9d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
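The inspect dump confirms the container itself is healthy: State.Status is "running" (Pid 1235674), the node keeps 192.168.85.2 on the no-preload-291044 network, and the API server port 8443/tcp is published at 127.0.0.1:34555. When only those fields matter, a narrower query is enough; the template below is a sketch using the same Go-template indexing the harness itself uses for the SSH port, not part of the test:

    docker inspect no-preload-291044 \
      --format 'status={{.State.Status}} pid={{.State.Pid}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} apiserver=127.0.0.1:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'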
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-291044 -n no-preload-291044
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-291044 -n no-preload-291044: exit status 2 (539.428424ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
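Exit status 2 with Host reported as Running is consistent with the failed pause attempt: the kubelet had already been disabled ("kubelet running: false" above) while the host container stayed up, so the cluster is degraded rather than healthy, which is why the harness notes "may be ok". Per-component detail can be read with the same --format mechanism used above; the field names below are assumed from minikube's status template, and the command is a sketch rather than harness output:

    out/minikube-linux-arm64 status -p no-preload-291044 \
      --format 'host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'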
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-291044 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-291044 logs -n 25: (1.536680188s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-236075 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-553553                                                                                                                                                                                                               │ disable-driver-mounts-553553 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:37 UTC │
	│ image   │ embed-certs-790346 image list --format=json                                                                                                                                                                                                   │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-790346 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-291044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ stop    │ -p no-preload-291044 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p newest-cni-515571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ stop    │ -p newest-cni-515571 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-515571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:38 UTC │
	│ addons  │ enable dashboard -p no-preload-291044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ image   │ newest-cni-515571 image list --format=json                                                                                                                                                                                                    │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-515571 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │                     │
	│ delete  │ -p newest-cni-515571                                                                                                                                                                                                                          │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ delete  │ -p newest-cni-515571                                                                                                                                                                                                                          │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ start   │ -p auto-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-731120                  │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │                     │
	│ image   │ no-preload-291044 image list --format=json                                                                                                                                                                                                    │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:39 UTC │ 08 Nov 25 10:39 UTC │
	│ pause   │ -p no-preload-291044 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:38:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:38:29.139048 1239783 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:38:29.139359 1239783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:38:29.139392 1239783 out.go:374] Setting ErrFile to fd 2...
	I1108 10:38:29.139411 1239783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:38:29.139713 1239783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:38:29.140171 1239783 out.go:368] Setting JSON to false
	I1108 10:38:29.141406 1239783 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33655,"bootTime":1762564655,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:38:29.141503 1239783 start.go:143] virtualization:  
	I1108 10:38:29.149275 1239783 out.go:179] * [auto-731120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:38:29.153549 1239783 notify.go:221] Checking for updates...
	I1108 10:38:29.158325 1239783 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:38:29.161841 1239783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:38:29.165487 1239783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:29.168385 1239783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:38:29.172710 1239783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:38:29.176980 1239783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:38:29.181694 1239783 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:29.181793 1239783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:38:29.229764 1239783 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:38:29.229886 1239783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:38:29.308290 1239783 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:38:29.297773156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:38:29.308409 1239783 docker.go:319] overlay module found
	I1108 10:38:29.312809 1239783 out.go:179] * Using the docker driver based on user configuration
	I1108 10:38:29.317629 1239783 start.go:309] selected driver: docker
	I1108 10:38:29.317654 1239783 start.go:930] validating driver "docker" against <nil>
	I1108 10:38:29.317704 1239783 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:38:29.318646 1239783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:38:29.453762 1239783 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:38:29.44074252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:38:29.453924 1239783 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:38:29.454166 1239783 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:38:29.459915 1239783 out.go:179] * Using Docker driver with root privileges
	I1108 10:38:29.463435 1239783 cni.go:84] Creating CNI manager for ""
	I1108 10:38:29.463502 1239783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:38:29.463510 1239783 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:38:29.463603 1239783 start.go:353] cluster config:
	{Name:auto-731120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-731120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1108 10:38:29.467093 1239783 out.go:179] * Starting "auto-731120" primary control-plane node in "auto-731120" cluster
	I1108 10:38:29.470281 1239783 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:38:29.473630 1239783 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:38:29.476672 1239783 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:29.476724 1239783 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:38:29.476735 1239783 cache.go:59] Caching tarball of preloaded images
	I1108 10:38:29.476820 1239783 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:38:29.476829 1239783 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:38:29.476937 1239783 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/config.json ...
	I1108 10:38:29.476953 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/config.json: {Name:mk193cca89a381ede09b2e13a126a53ce22bb603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:29.477082 1239783 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:38:29.497936 1239783 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:38:29.497954 1239783 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:38:29.497967 1239783 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:38:29.498001 1239783 start.go:360] acquireMachinesLock for auto-731120: {Name:mkd59fbb5cd3f8b291cfdf5c975f1abdf6be63da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:29.498093 1239783 start.go:364] duration metric: took 75.353µs to acquireMachinesLock for "auto-731120"
	I1108 10:38:29.498117 1239783 start.go:93] Provisioning new machine with config: &{Name:auto-731120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-731120 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:38:29.498181 1239783 start.go:125] createHost starting for "" (driver="docker")
	W1108 10:38:29.414195 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:31.915864 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	I1108 10:38:29.502004 1239783 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:38:29.502239 1239783 start.go:159] libmachine.API.Create for "auto-731120" (driver="docker")
	I1108 10:38:29.502274 1239783 client.go:173] LocalClient.Create starting
	I1108 10:38:29.502386 1239783 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem
	I1108 10:38:29.502420 1239783 main.go:143] libmachine: Decoding PEM data...
	I1108 10:38:29.502434 1239783 main.go:143] libmachine: Parsing certificate...
	I1108 10:38:29.502493 1239783 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem
	I1108 10:38:29.502513 1239783 main.go:143] libmachine: Decoding PEM data...
	I1108 10:38:29.502523 1239783 main.go:143] libmachine: Parsing certificate...
	I1108 10:38:29.502867 1239783 cli_runner.go:164] Run: docker network inspect auto-731120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:38:29.530276 1239783 cli_runner.go:211] docker network inspect auto-731120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:38:29.530361 1239783 network_create.go:284] running [docker network inspect auto-731120] to gather additional debugging logs...
	I1108 10:38:29.530384 1239783 cli_runner.go:164] Run: docker network inspect auto-731120
	W1108 10:38:29.554073 1239783 cli_runner.go:211] docker network inspect auto-731120 returned with exit code 1
	I1108 10:38:29.554106 1239783 network_create.go:287] error running [docker network inspect auto-731120]: docker network inspect auto-731120: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-731120 not found
	I1108 10:38:29.554121 1239783 network_create.go:289] output of [docker network inspect auto-731120]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-731120 not found
	
	** /stderr **
	I1108 10:38:29.554230 1239783 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:38:29.574380 1239783 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f127b1978c3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:c7:37:65:8c:96} reservation:<nil>}
	I1108 10:38:29.574716 1239783 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b98bf73d2e94 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:99:be:46:ea:86} reservation:<nil>}
	I1108 10:38:29.575029 1239783 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c4df73992be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:ad:c1:c0:ea:6d} reservation:<nil>}
	I1108 10:38:29.575486 1239783 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f8690}
	I1108 10:38:29.575509 1239783 network_create.go:124] attempt to create docker network auto-731120 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 10:38:29.575566 1239783 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-731120 auto-731120
	I1108 10:38:29.639863 1239783 network_create.go:108] docker network auto-731120 192.168.76.0/24 created
	I1108 10:38:29.639902 1239783 kic.go:121] calculated static IP "192.168.76.2" for the "auto-731120" container
	I1108 10:38:29.639970 1239783 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:38:29.665631 1239783 cli_runner.go:164] Run: docker volume create auto-731120 --label name.minikube.sigs.k8s.io=auto-731120 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:38:29.685143 1239783 oci.go:103] Successfully created a docker volume auto-731120
	I1108 10:38:29.685225 1239783 cli_runner.go:164] Run: docker run --rm --name auto-731120-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-731120 --entrypoint /usr/bin/test -v auto-731120:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:38:30.557198 1239783 oci.go:107] Successfully prepared a docker volume auto-731120
	I1108 10:38:30.557244 1239783 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:30.557263 1239783 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:38:30.557326 1239783 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-731120:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 10:38:34.406702 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:36.407908 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:38.411812 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	I1108 10:38:35.911607 1239783 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-731120:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (5.354241102s)
	I1108 10:38:35.911638 1239783 kic.go:203] duration metric: took 5.354371011s to extract preloaded images to volume ...
	W1108 10:38:35.911777 1239783 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:38:35.911887 1239783 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:38:36.016820 1239783 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-731120 --name auto-731120 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-731120 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-731120 --network auto-731120 --ip 192.168.76.2 --volume auto-731120:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:38:36.502546 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Running}}
	I1108 10:38:36.527187 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:38:36.558696 1239783 cli_runner.go:164] Run: docker exec auto-731120 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:38:36.632342 1239783 oci.go:144] the created container "auto-731120" has a running status.
	I1108 10:38:36.632368 1239783 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa...
	I1108 10:38:37.358297 1239783 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:38:37.393702 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:38:37.428486 1239783 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:38:37.428504 1239783 kic_runner.go:114] Args: [docker exec --privileged auto-731120 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:38:37.492553 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:38:37.512828 1239783 machine.go:94] provisionDockerMachine start ...
	I1108 10:38:37.512919 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:37.533034 1239783 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:37.533360 1239783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1108 10:38:37.533376 1239783 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:38:37.534000 1239783 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41570->127.0.0.1:34557: read: connection reset by peer
	W1108 10:38:40.909662 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:43.411245 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	I1108 10:38:40.683878 1239783 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-731120
	
	I1108 10:38:40.683901 1239783 ubuntu.go:182] provisioning hostname "auto-731120"
	I1108 10:38:40.683969 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:40.701351 1239783 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:40.701665 1239783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1108 10:38:40.701681 1239783 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-731120 && echo "auto-731120" | sudo tee /etc/hostname
	I1108 10:38:40.865211 1239783 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-731120
	
	I1108 10:38:40.865289 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:40.882980 1239783 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:40.883285 1239783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1108 10:38:40.883305 1239783 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-731120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-731120/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-731120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:38:41.036652 1239783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:38:41.036679 1239783 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:38:41.036710 1239783 ubuntu.go:190] setting up certificates
	I1108 10:38:41.036728 1239783 provision.go:84] configureAuth start
	I1108 10:38:41.036791 1239783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-731120
	I1108 10:38:41.054020 1239783 provision.go:143] copyHostCerts
	I1108 10:38:41.054093 1239783 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:38:41.054108 1239783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:38:41.054193 1239783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:38:41.054318 1239783 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:38:41.054330 1239783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:38:41.054362 1239783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:38:41.054424 1239783 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:38:41.054433 1239783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:38:41.054458 1239783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:38:41.054537 1239783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.auto-731120 san=[127.0.0.1 192.168.76.2 auto-731120 localhost minikube]
	I1108 10:38:42.045745 1239783 provision.go:177] copyRemoteCerts
	I1108 10:38:42.045819 1239783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:38:42.045862 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.065339 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:38:42.204125 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:38:42.247789 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1108 10:38:42.272630 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:38:42.297222 1239783 provision.go:87] duration metric: took 1.260474925s to configureAuth
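(Annotation: the configureAuth steps above copy the host CA material and generate a server certificate whose SANs are logged as [127.0.0.1 192.168.76.2 auto-731120 localhost minikube]. As a rough, hedged illustration of that kind of certificate generation — not minikube's actual provision.go code; the self-signing and the SAN list reuse are simplifying assumptions — a Go sketch with crypto/x509 could look like:)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs mirroring the ones logged above (illustrative only).
		dnsNames := []string{"auto-731120", "localhost", "minikube"}
		ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")}

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.auto-731120"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     dnsNames,
			IPAddresses:  ips,
		}
		// Self-signed for brevity; the real flow signs with ca.pem/ca-key.pem instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}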
	I1108 10:38:42.297263 1239783 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:38:42.297474 1239783 config.go:182] Loaded profile config "auto-731120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:42.297614 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.317888 1239783 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:42.318314 1239783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1108 10:38:42.318332 1239783 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:38:42.614229 1239783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:38:42.614254 1239783 machine.go:97] duration metric: took 5.101407316s to provisionDockerMachine
	I1108 10:38:42.614264 1239783 client.go:176] duration metric: took 13.111983873s to LocalClient.Create
	I1108 10:38:42.614279 1239783 start.go:167] duration metric: took 13.112041783s to libmachine.API.Create "auto-731120"
	I1108 10:38:42.614286 1239783 start.go:293] postStartSetup for "auto-731120" (driver="docker")
	I1108 10:38:42.614296 1239783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:38:42.614371 1239783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:38:42.614421 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.643704 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:38:42.754757 1239783 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:38:42.759110 1239783 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:38:42.759137 1239783 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:38:42.759149 1239783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:38:42.759210 1239783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:38:42.759314 1239783 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:38:42.759436 1239783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:38:42.769771 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:42.787544 1239783 start.go:296] duration metric: took 173.241642ms for postStartSetup
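(Annotation: the surrounding lines open SSH sessions to 127.0.0.1:34557 as user "docker" with the machine's id_rsa key, then run commands and scp files onto the node. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; the host, port, user, and key path are taken from the log, everything else is illustrative rather than minikube's sshutil/ssh_runner code:)

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node, not for production
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34557", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		out, err := sess.CombinedOutput("cat /etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}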
	I1108 10:38:42.787928 1239783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-731120
	I1108 10:38:42.805510 1239783 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/config.json ...
	I1108 10:38:42.805795 1239783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:38:42.805849 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.822948 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:38:42.927147 1239783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:38:42.931748 1239783 start.go:128] duration metric: took 13.433551764s to createHost
	I1108 10:38:42.931778 1239783 start.go:83] releasing machines lock for "auto-731120", held for 13.433668322s
	I1108 10:38:42.931847 1239783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-731120
	I1108 10:38:42.948962 1239783 ssh_runner.go:195] Run: cat /version.json
	I1108 10:38:42.948996 1239783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:38:42.949017 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.949057 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.970812 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:38:42.976871 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:38:43.182187 1239783 ssh_runner.go:195] Run: systemctl --version
	I1108 10:38:43.188954 1239783 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:38:43.230103 1239783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:38:43.235439 1239783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:38:43.235508 1239783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:38:43.266356 1239783 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:38:43.266440 1239783 start.go:496] detecting cgroup driver to use...
	I1108 10:38:43.266561 1239783 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:38:43.266653 1239783 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:38:43.284845 1239783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:38:43.297604 1239783 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:38:43.297670 1239783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:38:43.316530 1239783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:38:43.336976 1239783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:38:43.466078 1239783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:38:43.600064 1239783 docker.go:234] disabling docker service ...
	I1108 10:38:43.600150 1239783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:38:43.624592 1239783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:38:43.638711 1239783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:38:43.759820 1239783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:38:43.880328 1239783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:38:43.893437 1239783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:38:43.919882 1239783 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:38:43.920011 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.930319 1239783 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:38:43.930437 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.939963 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.949168 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.961469 1239783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:38:43.970451 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.979979 1239783 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.993756 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:44.005740 1239783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:38:44.015679 1239783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:38:44.024121 1239783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:44.157504 1239783 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 10:38:44.299251 1239783 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:38:44.299343 1239783 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:38:44.303393 1239783 start.go:564] Will wait 60s for crictl version
	I1108 10:38:44.303486 1239783 ssh_runner.go:195] Run: which crictl
	I1108 10:38:44.307319 1239783 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:38:44.334541 1239783 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:38:44.334666 1239783 ssh_runner.go:195] Run: crio --version
	I1108 10:38:44.364774 1239783 ssh_runner.go:195] Run: crio --version
	I1108 10:38:44.394846 1239783 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:38:44.397798 1239783 cli_runner.go:164] Run: docker network inspect auto-731120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:38:44.415922 1239783 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:38:44.419628 1239783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:38:44.431226 1239783 kubeadm.go:884] updating cluster {Name:auto-731120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-731120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:38:44.431333 1239783 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:44.431389 1239783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:38:44.463899 1239783 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:38:44.463925 1239783 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:38:44.463978 1239783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:38:44.492050 1239783 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:38:44.492079 1239783 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:38:44.492087 1239783 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:38:44.492174 1239783 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-731120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-731120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:38:44.492259 1239783 ssh_runner.go:195] Run: crio config
	I1108 10:38:44.553518 1239783 cni.go:84] Creating CNI manager for ""
	I1108 10:38:44.553554 1239783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:38:44.553572 1239783 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:38:44.553596 1239783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-731120 NodeName:auto-731120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:38:44.553741 1239783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-731120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 10:38:44.553828 1239783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:38:44.562644 1239783 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:38:44.562737 1239783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:38:44.570477 1239783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1108 10:38:44.583435 1239783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:38:44.596910 1239783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
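(Annotation: the kubeadm config printed above and written here to /var/tmp/minikube/kubeadm.yaml.new is a multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small hedged sketch that splits such a file into its documents and reports each apiVersion/kind — useful for sanity-checking what was generated; the file path comes from the log, and using gopkg.in/yaml.v3 is an assumption, not what minikube itself uses:)

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				panic(err)
			}
			// Each document carries apiVersion and kind, e.g. kubeadm.k8s.io/v1beta4 / ClusterConfiguration.
			fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
		}
	}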
	I1108 10:38:44.609738 1239783 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:38:44.613384 1239783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:38:44.623370 1239783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:44.742168 1239783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:38:44.764734 1239783 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120 for IP: 192.168.76.2
	I1108 10:38:44.764757 1239783 certs.go:195] generating shared ca certs ...
	I1108 10:38:44.764774 1239783 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:44.764980 1239783 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:38:44.765042 1239783 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:38:44.765054 1239783 certs.go:257] generating profile certs ...
	I1108 10:38:44.765125 1239783 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.key
	I1108 10:38:44.765145 1239783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt with IP's: []
	I1108 10:38:45.390556 1239783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt ...
	I1108 10:38:45.390586 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt: {Name:mk746bfa66833d669f1861e9ec5e0248f18da719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:45.390937 1239783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.key ...
	I1108 10:38:45.390956 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.key: {Name:mka2b36c93d2d4ba6913e777534652f1fb328644 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:45.391154 1239783 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key.5766770b
	I1108 10:38:45.391176 1239783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt.5766770b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 10:38:45.830250 1239783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt.5766770b ...
	I1108 10:38:45.830279 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt.5766770b: {Name:mk2bffa3e0b315142911c7c424a2d666b8827f84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:45.830468 1239783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key.5766770b ...
	I1108 10:38:45.830481 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key.5766770b: {Name:mk19c6ef73a4835ab27470811610f495176dd59b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:45.830569 1239783 certs.go:382] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt.5766770b -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt
	I1108 10:38:45.830658 1239783 certs.go:386] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key.5766770b -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key
	I1108 10:38:45.830722 1239783 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.key
	I1108 10:38:45.830740 1239783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.crt with IP's: []
	I1108 10:38:46.647689 1239783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.crt ...
	I1108 10:38:46.647723 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.crt: {Name:mk7d0f3f3799428da70d615ebae19f4feee14096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:46.647919 1239783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.key ...
	I1108 10:38:46.647930 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.key: {Name:mk710f6012005d54fc176370624632be88a68964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:46.648114 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:38:46.648157 1239783 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:38:46.648166 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:38:46.648193 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:38:46.648223 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:38:46.648248 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:38:46.648294 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:46.648926 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:38:46.670981 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:38:46.691272 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:38:46.709536 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:38:46.727738 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1108 10:38:46.746464 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:38:46.764400 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:38:46.781764 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:38:46.801433 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:38:46.820193 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:38:46.837682 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:38:46.861922 1239783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:38:46.875167 1239783 ssh_runner.go:195] Run: openssl version
	I1108 10:38:46.881502 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:38:46.890165 1239783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:38:46.896784 1239783 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:38:46.896857 1239783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:38:46.944345 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:38:46.959219 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:38:46.968013 1239783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:38:46.971797 1239783 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:38:46.971859 1239783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:38:47.013417 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:38:47.022286 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:38:47.031384 1239783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:47.035194 1239783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:47.035267 1239783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:47.077139 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
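(Annotation: the lines above install each CA bundle under /usr/share/ca-certificates and link it into /etc/ssl/certs under its `openssl x509 -hash` name (51391683.0, 3ec20f2e.0, b5213941.0). A rough sketch of the same idea, shelling out to openssl for the hash; the paths are the ones from the log, and this mirrors the shell commands shown rather than minikube's Go implementation:)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkByHash(pemPath string) error {
		// Same subject hash OpenSSL uses for certificate-directory lookups.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Replace any stale link, then point <hash>.0 at the PEM, as the logged ln -fs does.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		for _, p := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/1029234.pem",
			"/usr/share/ca-certificates/10292342.pem",
		} {
			if err := linkByHash(p); err != nil {
				fmt.Fprintln(os.Stderr, p, err)
			}
		}
	}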
	I1108 10:38:47.085476 1239783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:38:47.092926 1239783 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:38:47.092987 1239783 kubeadm.go:401] StartCluster: {Name:auto-731120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-731120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:38:47.093065 1239783 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:38:47.093128 1239783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:38:47.126995 1239783 cri.go:89] found id: ""
	I1108 10:38:47.127080 1239783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:38:47.141543 1239783 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:38:47.151022 1239783 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:38:47.151091 1239783 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:38:47.166481 1239783 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:38:47.166503 1239783 kubeadm.go:158] found existing configuration files:
	
	I1108 10:38:47.166560 1239783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:38:47.174269 1239783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:38:47.174343 1239783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:38:47.182226 1239783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:38:47.190257 1239783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:38:47.190321 1239783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:38:47.197673 1239783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:38:47.205350 1239783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:38:47.205418 1239783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:38:47.213413 1239783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:38:47.221531 1239783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:38:47.221617 1239783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:38:47.229359 1239783 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:38:47.271780 1239783 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:38:47.272104 1239783 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:38:47.295144 1239783 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:38:47.295223 1239783 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:38:47.295266 1239783 kubeadm.go:319] OS: Linux
	I1108 10:38:47.295317 1239783 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:38:47.295373 1239783 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:38:47.295426 1239783 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:38:47.295480 1239783 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:38:47.295534 1239783 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:38:47.295588 1239783 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:38:47.295639 1239783 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:38:47.295692 1239783 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:38:47.295744 1239783 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:38:47.374593 1239783 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:38:47.374725 1239783 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:38:47.374836 1239783 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:38:47.382990 1239783 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1108 10:38:45.921295 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:48.407724 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	I1108 10:38:47.389126 1239783 out.go:252]   - Generating certificates and keys ...
	I1108 10:38:47.389273 1239783 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:38:47.389393 1239783 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:38:47.622865 1239783 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:38:47.815472 1239783 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:38:48.056313 1239783 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:38:48.721769 1239783 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1108 10:38:50.408796 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:52.922218 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	I1108 10:38:49.198183 1239783 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:38:49.198450 1239783 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-731120 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:38:49.625642 1239783 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:38:49.626031 1239783 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-731120 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:38:50.411992 1239783 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:38:51.073598 1239783 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:38:51.148481 1239783 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:38:51.149075 1239783 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:38:51.488824 1239783 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:38:53.193718 1239783 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:38:53.536315 1239783 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:38:54.482269 1239783 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:38:54.818390 1239783 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:38:54.819114 1239783 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:38:54.822251 1239783 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:38:54.910595 1235505 pod_ready.go:94] pod "coredns-66bc5c9577-nvtlg" is "Ready"
	I1108 10:38:54.910633 1235505 pod_ready.go:86] duration metric: took 32.008905809s for pod "coredns-66bc5c9577-nvtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:54.914461 1235505 pod_ready.go:83] waiting for pod "etcd-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:54.921754 1235505 pod_ready.go:94] pod "etcd-no-preload-291044" is "Ready"
	I1108 10:38:54.921784 1235505 pod_ready.go:86] duration metric: took 7.292889ms for pod "etcd-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:54.924622 1235505 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:54.930277 1235505 pod_ready.go:94] pod "kube-apiserver-no-preload-291044" is "Ready"
	I1108 10:38:54.930312 1235505 pod_ready.go:86] duration metric: took 5.659846ms for pod "kube-apiserver-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:54.935021 1235505 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:55.107171 1235505 pod_ready.go:94] pod "kube-controller-manager-no-preload-291044" is "Ready"
	I1108 10:38:55.107202 1235505 pod_ready.go:86] duration metric: took 172.154181ms for pod "kube-controller-manager-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:55.306165 1235505 pod_ready.go:83] waiting for pod "kube-proxy-2m8tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:55.706872 1235505 pod_ready.go:94] pod "kube-proxy-2m8tx" is "Ready"
	I1108 10:38:55.706898 1235505 pod_ready.go:86] duration metric: took 400.659794ms for pod "kube-proxy-2m8tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:55.908353 1235505 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:56.305397 1235505 pod_ready.go:94] pod "kube-scheduler-no-preload-291044" is "Ready"
	I1108 10:38:56.305421 1235505 pod_ready.go:86] duration metric: took 397.042761ms for pod "kube-scheduler-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:56.305434 1235505 pod_ready.go:40] duration metric: took 33.407684709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:38:56.372731 1235505 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:38:56.376058 1235505 out.go:179] * Done! kubectl is now configured to use "no-preload-291044" cluster and "default" namespace by default
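(Annotation: the pod_ready.go lines above (process 1235505) poll each kube-system control-plane pod until its Ready condition is true before declaring the no-preload-291044 cluster done. A hedged client-go sketch of that style of wait follows; the kubeconfig path and pod name are taken from the log, the interval and timeout are illustrative, and this is not minikube's pod_ready.go:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the named pod reports the Ready condition as True.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21865-1027379/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-no-preload-291044"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}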
	I1108 10:38:54.825543 1239783 out.go:252]   - Booting up control plane ...
	I1108 10:38:54.825666 1239783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:38:54.825753 1239783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:38:54.827493 1239783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:38:54.844923 1239783 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:38:54.845229 1239783 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:38:54.855684 1239783 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:38:54.855789 1239783 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:38:54.855840 1239783 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:38:55.035134 1239783 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:38:55.035281 1239783 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:38:56.540776 1239783 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501756872s
	I1108 10:38:56.540898 1239783 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:38:56.540984 1239783 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 10:38:56.541077 1239783 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:38:56.541159 1239783 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 10:38:59.247935 1239783 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.707495775s
	I1108 10:39:03.442533 1239783 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.902469886s
	I1108 10:39:04.043377 1239783 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.50312217s
	I1108 10:39:04.063374 1239783 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:39:04.081656 1239783 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:39:04.096957 1239783 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:39:04.097158 1239783 kubeadm.go:319] [mark-control-plane] Marking the node auto-731120 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:39:04.109688 1239783 kubeadm.go:319] [bootstrap-token] Using token: c9kz4p.88phkg0unv6h2q55
	I1108 10:39:04.112569 1239783 out.go:252]   - Configuring RBAC rules ...
	I1108 10:39:04.112698 1239783 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:39:04.117482 1239783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:39:04.127385 1239783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:39:04.131444 1239783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:39:04.137923 1239783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:39:04.143268 1239783 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:39:04.455025 1239783 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:39:04.888412 1239783 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:39:05.450963 1239783 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:39:05.452143 1239783 kubeadm.go:319] 
	I1108 10:39:05.452223 1239783 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:39:05.452230 1239783 kubeadm.go:319] 
	I1108 10:39:05.452306 1239783 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:39:05.452311 1239783 kubeadm.go:319] 
	I1108 10:39:05.452336 1239783 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:39:05.452395 1239783 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:39:05.452494 1239783 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:39:05.452501 1239783 kubeadm.go:319] 
	I1108 10:39:05.452563 1239783 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:39:05.452575 1239783 kubeadm.go:319] 
	I1108 10:39:05.452622 1239783 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:39:05.452626 1239783 kubeadm.go:319] 
	I1108 10:39:05.452676 1239783 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:39:05.452750 1239783 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:39:05.452817 1239783 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:39:05.452821 1239783 kubeadm.go:319] 
	I1108 10:39:05.452904 1239783 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:39:05.452979 1239783 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:39:05.452983 1239783 kubeadm.go:319] 
	I1108 10:39:05.453066 1239783 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c9kz4p.88phkg0unv6h2q55 \
	I1108 10:39:05.453167 1239783 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 \
	I1108 10:39:05.453187 1239783 kubeadm.go:319] 	--control-plane 
	I1108 10:39:05.453192 1239783 kubeadm.go:319] 
	I1108 10:39:05.453275 1239783 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:39:05.453279 1239783 kubeadm.go:319] 
	I1108 10:39:05.453359 1239783 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c9kz4p.88phkg0unv6h2q55 \
	I1108 10:39:05.453461 1239783 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 
	I1108 10:39:05.458462 1239783 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:39:05.458706 1239783 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:39:05.458810 1239783 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 10:39:05.458828 1239783 cni.go:84] Creating CNI manager for ""
	I1108 10:39:05.458835 1239783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:39:05.461961 1239783 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 10:39:05.464978 1239783 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:39:05.470625 1239783 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 10:39:05.470651 1239783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:39:05.486663 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:39:06.226374 1239783 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:39:06.226506 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:06.226600 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-731120 minikube.k8s.io/updated_at=2025_11_08T10_39_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=auto-731120 minikube.k8s.io/primary=true
	I1108 10:39:06.411461 1239783 ops.go:34] apiserver oom_adj: -16
	I1108 10:39:06.411607 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:06.911740 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:07.411669 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:07.912598 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:08.411896 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:08.912228 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:09.411649 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:09.912196 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:10.411680 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:10.613371 1239783 kubeadm.go:1114] duration metric: took 4.386909583s to wait for elevateKubeSystemPrivileges
	I1108 10:39:10.613402 1239783 kubeadm.go:403] duration metric: took 23.520417897s to StartCluster
	I1108 10:39:10.613420 1239783 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:39:10.613476 1239783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:39:10.614484 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:39:10.614695 1239783 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:39:10.614803 1239783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:39:10.615049 1239783 config.go:182] Loaded profile config "auto-731120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:39:10.615079 1239783 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:39:10.615137 1239783 addons.go:70] Setting storage-provisioner=true in profile "auto-731120"
	I1108 10:39:10.615151 1239783 addons.go:239] Setting addon storage-provisioner=true in "auto-731120"
	I1108 10:39:10.615172 1239783 host.go:66] Checking if "auto-731120" exists ...
	I1108 10:39:10.615889 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:39:10.616125 1239783 addons.go:70] Setting default-storageclass=true in profile "auto-731120"
	I1108 10:39:10.616162 1239783 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-731120"
	I1108 10:39:10.616473 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:39:10.620559 1239783 out.go:179] * Verifying Kubernetes components...
	I1108 10:39:10.634395 1239783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:39:10.659830 1239783 addons.go:239] Setting addon default-storageclass=true in "auto-731120"
	I1108 10:39:10.659875 1239783 host.go:66] Checking if "auto-731120" exists ...
	I1108 10:39:10.660327 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:39:10.680085 1239783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Nov 08 10:38:49 no-preload-291044 crio[651]: time="2025-11-08T10:38:49.815590692Z" level=info msg="Removed container 336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk/dashboard-metrics-scraper" id=bd8630be-163e-4538-b92e-7debf78dca15 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:38:52 no-preload-291044 conmon[1134]: conmon d1dbd6cc1f1dc5794d5f <ninfo>: container 1137 exited with status 1
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.808188556Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d18eb3e9-f68f-40ae-a4fa-313bf0f281d0 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.809639509Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1d3e3876-bf7d-46c1-a038-fcc3d0306751 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.812131447Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=79a74312-d2a9-4a60-bf49-1566930ccb06 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.812225082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.819914332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.820890131Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/14f5f7d0215035c3e8346a2ea360ddfc58fdad0b1f748cf6426083e1955fbe37/merged/etc/passwd: no such file or directory"
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.821001709Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/14f5f7d0215035c3e8346a2ea360ddfc58fdad0b1f748cf6426083e1955fbe37/merged/etc/group: no such file or directory"
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.821760817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.851822438Z" level=info msg="Created container 65cbe2bb9985bf3d82c006541771b098511632bf16f3207681bdffd6065d3a5a: kube-system/storage-provisioner/storage-provisioner" id=79a74312-d2a9-4a60-bf49-1566930ccb06 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.857441078Z" level=info msg="Starting container: 65cbe2bb9985bf3d82c006541771b098511632bf16f3207681bdffd6065d3a5a" id=3e033464-6f33-4bc5-b9c3-476f66f731d0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.860973167Z" level=info msg="Started container" PID=1621 containerID=65cbe2bb9985bf3d82c006541771b098511632bf16f3207681bdffd6065d3a5a description=kube-system/storage-provisioner/storage-provisioner id=3e033464-6f33-4bc5-b9c3-476f66f731d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2bc67ae4df7ebb08d374e7a97a00aac70c4a9b06878f40b7c9d342d24e127b8
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.617223398Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.624956488Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.625120339Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.625208156Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.63055204Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.630707284Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.630788677Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.63414514Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.634304906Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.634391402Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.640698506Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.640853201Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	65cbe2bb9985b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           20 seconds ago      Running             storage-provisioner         2                   c2bc67ae4df7e       storage-provisioner                          kube-system
	9ae590763b2f2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   2236741355d59       dashboard-metrics-scraper-6ffb444bf9-h2xvk   kubernetes-dashboard
	6c6e800b9b138       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago      Running             kubernetes-dashboard        0                   69067acda16ce       kubernetes-dashboard-855c9754f9-rttff        kubernetes-dashboard
	6c5c96793404d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   0d37978905194       busybox                                      default
	c33eeb214e958       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   64c93bd624185       coredns-66bc5c9577-nvtlg                     kube-system
	ab334d5bd7ba7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   436584c46788a       kindnet-nct2b                                kube-system
	d1dbd6cc1f1dc       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           50 seconds ago      Exited              storage-provisioner         1                   c2bc67ae4df7e       storage-provisioner                          kube-system
	5b59e1565e30b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   88e7c62316202       kube-proxy-2m8tx                             kube-system
	5ff011c39fa1a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   e8b6237022914       kube-controller-manager-no-preload-291044    kube-system
	fef0c37718a66       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   d6d862e4720f1       kube-scheduler-no-preload-291044             kube-system
	99b5f6a837326       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   92531495aa672       kube-apiserver-no-preload-291044             kube-system
	daf6ee479a7ca       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   2360825bca4c3       etcd-no-preload-291044                       kube-system
	
	
	==> coredns [c33eeb214e958294220dbe340086eab0da97ee59bafe81bc2bc509133f4b77b0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54796 - 52919 "HINFO IN 8601295041522844400.5362363667463070056. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013066141s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-291044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-291044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=no-preload-291044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_37_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-291044
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:39:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:38:51 +0000   Sat, 08 Nov 2025 10:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:38:51 +0000   Sat, 08 Nov 2025 10:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:38:51 +0000   Sat, 08 Nov 2025 10:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:38:51 +0000   Sat, 08 Nov 2025 10:37:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-291044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                53ced70c-1627-4fc9-9eaa-b752fd9e6d98
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-nvtlg                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-no-preload-291044                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-nct2b                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-no-preload-291044              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-no-preload-291044     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-2m8tx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-no-preload-291044              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-h2xvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rttff         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 114s                   kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Warning  CgroupV1                 2m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node no-preload-291044 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node no-preload-291044 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node no-preload-291044 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m                     kubelet          Node no-preload-291044 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m                     kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m                     kubelet          Node no-preload-291044 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m                     kubelet          Node no-preload-291044 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                     kubelet          Starting kubelet.
	  Normal   RegisteredNode           116s                   node-controller  Node no-preload-291044 event: Registered Node no-preload-291044 in Controller
	  Normal   NodeReady                99s                    kubelet          Node no-preload-291044 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node no-preload-291044 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node no-preload-291044 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node no-preload-291044 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node no-preload-291044 event: Registered Node no-preload-291044 in Controller
	
	
	==> dmesg <==
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:36] overlayfs: idmapped layers are currently not supported
	[ +30.788294] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:38] overlayfs: idmapped layers are currently not supported
	[  +6.100629] overlayfs: idmapped layers are currently not supported
	[ +43.651730] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [daf6ee479a7cae60eb0974a556bff3ab215747a99f91f962708a80a61d9ba6f5] <==
	{"level":"warn","ts":"2025-11-08T10:38:18.254800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.271117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.312905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.381677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.439127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.482882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.496504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.541922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.650007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.751182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.809517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.840402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.855201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.892183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.937836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.980259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.034171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.073335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.131930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.176764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.259724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.285903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.322035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.346089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.439074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:39:13 up  9:21,  0 user,  load average: 6.13, 4.73, 3.54
	Linux no-preload-291044 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ab334d5bd7ba72aea7af822ddc9751317c502a94c09a2740bcae5d2371922e43] <==
	I1108 10:38:22.330304       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:38:22.330527       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:38:22.331159       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:38:22.331175       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:38:22.331186       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:38:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:38:22.615770       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:38:22.615794       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:38:22.615803       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:38:22.616580       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:38:52.616343       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:38:52.616413       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:38:52.616560       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:38:52.616639       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:38:53.716758       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:38:53.716886       1 metrics.go:72] Registering metrics
	I1108 10:38:53.716993       1 controller.go:711] "Syncing nftables rules"
	I1108 10:39:02.616218       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:39:02.616304       1 main.go:301] handling current node
	I1108 10:39:12.616034       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:39:12.616062       1 main.go:301] handling current node
	
	
	==> kube-apiserver [99b5f6a8373260a1fb2a88d8f9ff8805d70fb0e4e09b4e2bea1c955d090e83a3] <==
	I1108 10:38:20.615273       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:38:20.635659       1 cache.go:39] Caches are synced for autoregister controller
	E1108 10:38:20.644136       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:38:20.661955       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:38:20.673502       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:38:20.673529       1 policy_source.go:240] refreshing policies
	I1108 10:38:20.687124       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:38:20.703620       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:38:20.717470       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:38:20.717622       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:38:20.728284       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:38:20.721316       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:38:20.721330       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:38:20.733080       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:38:21.328838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:38:21.426396       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:38:21.451690       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:38:21.649989       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:38:21.826210       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:38:21.941874       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:38:22.379799       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.59.26"}
	I1108 10:38:22.429119       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.201.122"}
	I1108 10:38:25.128119       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:38:25.426882       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:38:25.608302       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5ff011c39fa1a4e6ccf1602407612d6fd09adb5c8853548d45cbc57693896266] <==
	I1108 10:38:25.004333       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:38:25.007823       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:38:25.010149       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:38:25.016685       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:38:25.017277       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 10:38:25.018359       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:38:25.018453       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:38:25.019756       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:38:25.019881       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:38:25.021222       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 10:38:25.022364       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:38:25.026902       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:38:25.028067       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:38:25.029335       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:38:25.037719       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 10:38:25.037856       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 10:38:25.037945       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:38:25.037995       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 10:38:25.038057       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 10:38:25.038086       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 10:38:25.041377       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:38:25.041412       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:38:25.041419       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:38:25.048920       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:38:25.054908       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [5b59e1565e30b8151a42f654301131ca5a9b85a2c6f83767a903111bd6f7c44b] <==
	I1108 10:38:22.563252       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:38:22.806480       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:38:22.906643       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:38:22.906703       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:38:22.906836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:38:22.935640       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:38:22.935695       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:38:22.942328       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:38:22.942614       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:38:22.942639       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:38:22.948589       1 config.go:200] "Starting service config controller"
	I1108 10:38:22.948614       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:38:22.948632       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:38:22.948636       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:38:22.948648       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:38:22.948652       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:38:22.949252       1 config.go:309] "Starting node config controller"
	I1108 10:38:22.949269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:38:22.949275       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:38:23.048753       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:38:23.048791       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 10:38:23.048859       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fef0c37718a669a3a308b4a0ee7aa3629f5c411a3f86070c7497fead7a730494] <==
	I1108 10:38:18.546463       1 serving.go:386] Generated self-signed cert in-memory
	W1108 10:38:20.428835       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 10:38:20.429773       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 10:38:20.429846       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 10:38:20.429879       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 10:38:20.626094       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:38:20.626128       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:38:20.655108       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:38:20.655262       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:38:20.685853       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:38:20.685884       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:38:20.786335       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:38:25 no-preload-291044 kubelet[766]: I1108 10:38:25.748466     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r684k\" (UniqueName: \"kubernetes.io/projected/a722ea55-9e8c-4c23-aa7f-ad48c06d67ec-kube-api-access-r684k\") pod \"kubernetes-dashboard-855c9754f9-rttff\" (UID: \"a722ea55-9e8c-4c23-aa7f-ad48c06d67ec\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rttff"
	Nov 08 10:38:25 no-preload-291044 kubelet[766]: I1108 10:38:25.748534     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a722ea55-9e8c-4c23-aa7f-ad48c06d67ec-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rttff\" (UID: \"a722ea55-9e8c-4c23-aa7f-ad48c06d67ec\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rttff"
	Nov 08 10:38:25 no-preload-291044 kubelet[766]: W1108 10:38:25.869694     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/crio-2236741355d59f0eac33adda0ece88f8b2edcb0fe453f29cefe02adec2d7beb6 WatchSource:0}: Error finding container 2236741355d59f0eac33adda0ece88f8b2edcb0fe453f29cefe02adec2d7beb6: Status 404 returned error can't find the container with id 2236741355d59f0eac33adda0ece88f8b2edcb0fe453f29cefe02adec2d7beb6
	Nov 08 10:38:26 no-preload-291044 kubelet[766]: W1108 10:38:26.221424     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/crio-69067acda16ce370940915e0221591850a8363254affeb8c0a42726323a59089 WatchSource:0}: Error finding container 69067acda16ce370940915e0221591850a8363254affeb8c0a42726323a59089: Status 404 returned error can't find the container with id 69067acda16ce370940915e0221591850a8363254affeb8c0a42726323a59089
	Nov 08 10:38:31 no-preload-291044 kubelet[766]: I1108 10:38:31.716767     766 scope.go:117] "RemoveContainer" containerID="15b033f5b8d903e7496d87ae6cc76ccbfcc3a2882024b2ce2e5eff819d2a1545"
	Nov 08 10:38:32 no-preload-291044 kubelet[766]: I1108 10:38:32.735846     766 scope.go:117] "RemoveContainer" containerID="336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0"
	Nov 08 10:38:32 no-preload-291044 kubelet[766]: E1108 10:38:32.736495     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:38:32 no-preload-291044 kubelet[766]: I1108 10:38:32.738475     766 scope.go:117] "RemoveContainer" containerID="15b033f5b8d903e7496d87ae6cc76ccbfcc3a2882024b2ce2e5eff819d2a1545"
	Nov 08 10:38:33 no-preload-291044 kubelet[766]: I1108 10:38:33.744670     766 scope.go:117] "RemoveContainer" containerID="336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0"
	Nov 08 10:38:33 no-preload-291044 kubelet[766]: E1108 10:38:33.744825     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:38:35 no-preload-291044 kubelet[766]: I1108 10:38:35.847406     766 scope.go:117] "RemoveContainer" containerID="336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0"
	Nov 08 10:38:35 no-preload-291044 kubelet[766]: E1108 10:38:35.847584     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:38:49 no-preload-291044 kubelet[766]: I1108 10:38:49.426711     766 scope.go:117] "RemoveContainer" containerID="336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0"
	Nov 08 10:38:49 no-preload-291044 kubelet[766]: I1108 10:38:49.795927     766 scope.go:117] "RemoveContainer" containerID="336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0"
	Nov 08 10:38:49 no-preload-291044 kubelet[766]: I1108 10:38:49.796222     766 scope.go:117] "RemoveContainer" containerID="9ae590763b2f2cda1d610cc1f78b2ea77114a7740040e661837d6264d55fa642"
	Nov 08 10:38:49 no-preload-291044 kubelet[766]: E1108 10:38:49.796371     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:38:49 no-preload-291044 kubelet[766]: I1108 10:38:49.825848     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rttff" podStartSLOduration=12.986437713 podStartE2EDuration="24.825831079s" podCreationTimestamp="2025-11-08 10:38:25 +0000 UTC" firstStartedPulling="2025-11-08 10:38:26.225114336 +0000 UTC m=+14.103300973" lastFinishedPulling="2025-11-08 10:38:38.064507694 +0000 UTC m=+25.942694339" observedRunningTime="2025-11-08 10:38:38.784778817 +0000 UTC m=+26.662965462" watchObservedRunningTime="2025-11-08 10:38:49.825831079 +0000 UTC m=+37.704017724"
	Nov 08 10:38:52 no-preload-291044 kubelet[766]: I1108 10:38:52.807209     766 scope.go:117] "RemoveContainer" containerID="d1dbd6cc1f1dc5794d5f0bdb0bec35359fb7abbfb462e4a28128e68598c92cad"
	Nov 08 10:38:55 no-preload-291044 kubelet[766]: I1108 10:38:55.847616     766 scope.go:117] "RemoveContainer" containerID="9ae590763b2f2cda1d610cc1f78b2ea77114a7740040e661837d6264d55fa642"
	Nov 08 10:38:55 no-preload-291044 kubelet[766]: E1108 10:38:55.847814     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:39:08 no-preload-291044 kubelet[766]: I1108 10:39:08.426475     766 scope.go:117] "RemoveContainer" containerID="9ae590763b2f2cda1d610cc1f78b2ea77114a7740040e661837d6264d55fa642"
	Nov 08 10:39:08 no-preload-291044 kubelet[766]: E1108 10:39:08.427117     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:39:09 no-preload-291044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:39:09 no-preload-291044 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:39:09 no-preload-291044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [6c6e800b9b138a613ccf880559f7dab5ee4100ad4b76378594c6f7fa68a7d4af] <==
	2025/11/08 10:38:38 Using namespace: kubernetes-dashboard
	2025/11/08 10:38:38 Using in-cluster config to connect to apiserver
	2025/11/08 10:38:38 Using secret token for csrf signing
	2025/11/08 10:38:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:38:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:38:38 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:38:38 Generating JWE encryption key
	2025/11/08 10:38:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:38:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:38:38 Initializing JWE encryption key from synchronized object
	2025/11/08 10:38:38 Creating in-cluster Sidecar client
	2025/11/08 10:38:38 Serving insecurely on HTTP port: 9090
	2025/11/08 10:38:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:39:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:38:38 Starting overwatch
	
	
	==> storage-provisioner [65cbe2bb9985bf3d82c006541771b098511632bf16f3207681bdffd6065d3a5a] <==
	I1108 10:38:52.891317       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:38:52.936432       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:38:52.936585       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:38:52.939835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:38:56.394710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:00.655380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:04.253581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:07.307198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:10.330693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:10.339733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:39:10.339896       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:39:10.344122       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62d2386e-59b0-4bb3-9886-de4d8f35e247", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-291044_aff3036a-f182-48f7-9ca0-b1e7e39ad7cc became leader
	W1108 10:39:10.348955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:39:10.349193       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-291044_aff3036a-f182-48f7-9ca0-b1e7e39ad7cc!
	W1108 10:39:10.370770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:39:10.449742       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-291044_aff3036a-f182-48f7-9ca0-b1e7e39ad7cc!
	W1108 10:39:12.373673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:12.383770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d1dbd6cc1f1dc5794d5f0bdb0bec35359fb7abbfb462e4a28128e68598c92cad] <==
	I1108 10:38:22.480783       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:38:52.482916       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-291044 -n no-preload-291044
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-291044 -n no-preload-291044: exit status 2 (380.122122ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-291044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-291044
helpers_test.go:243: (dbg) docker inspect no-preload-291044:

-- stdout --
	[
	    {
	        "Id": "4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a",
	        "Created": "2025-11-08T10:36:27.945864714Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1235674,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T10:38:03.785005639Z",
	            "FinishedAt": "2025-11-08T10:38:02.824539639Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/hostname",
	        "HostsPath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/hosts",
	        "LogPath": "/var/lib/docker/containers/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a-json.log",
	        "Name": "/no-preload-291044",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-291044:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-291044",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a",
	                "LowerDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716-init/diff:/var/lib/docker/overlay2/b684067a8299c84dec5096f63c285c198df1294fa80d656887e41842a3eb2948/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4697ebe29aa4c658be06f241ad0b28d2d8884c82f982891f3daff5359fb75716/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-291044",
	                "Source": "/var/lib/docker/volumes/no-preload-291044/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-291044",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-291044",
	                "name.minikube.sigs.k8s.io": "no-preload-291044",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "104edd42bdd92bee327412227bf59e111db6cecbfa395faf1287a2085a42f70d",
	            "SandboxKey": "/var/run/docker/netns/104edd42bdd9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34552"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34553"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34556"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34554"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34555"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-291044": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:ae:6f:a2:3e:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "15d9ca830af40cf01657fa03afa3cf3bcbb4c14b9a6b5c8dfc90bca89de4ebc4",
	                    "EndpointID": "8442ee0cc2e5f378efe33e4537de30eece01257c82228a7ae3e104e55606d85d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-291044",
	                        "4dafcc75ae9d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
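The docker inspect dump above is the raw form of what both the test helpers and minikube itself query with --format Go templates; the Last Start log further down uses exactly this pattern to resolve the host port published for 22/tcp. Two read-only examples against the state captured above (values should match the Ports and Networks sections of the inspect output):

  # host port that the API server port 8443/tcp is published on (34555 in the inspect output above)
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-291044
  # container IP on the no-preload-291044 network (192.168.85.2 above)
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-291044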
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-291044 -n no-preload-291044
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-291044 -n no-preload-291044: exit status 2 (365.823269ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
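Both status probes in this post-mortem exit with status 2 while the formatted fields still print "Running": the pause of this profile is the operation under test and, per the last entry of the Audit table in the logs that follow, never recorded an end time. A sketch of the cycle the test drives, using only commands and flags that appear elsewhere in this report; the expectation noted in the comment is a hedge based on the exit codes observed above, not a documented contract:

  out/minikube-linux-arm64 pause -p no-preload-291044 --alsologtostderr -v=1
  out/minikube-linux-arm64 status -p no-preload-291044   # typically exits non-zero while the runtime is paused
  out/minikube-linux-arm64 unpause -p no-preload-291044 --alsologtostderr -v=1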
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-291044 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-291044 logs -n 25: (1.298500272s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-236075 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p default-k8s-diff-port-236075                                                                                                                                                                                                               │ default-k8s-diff-port-236075 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-553553                                                                                                                                                                                                               │ disable-driver-mounts-553553 │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:36 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:36 UTC │ 08 Nov 25 10:37 UTC │
	│ image   │ embed-certs-790346 image list --format=json                                                                                                                                                                                                   │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-790346 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-790346                                                                                                                                                                                                                         │ embed-certs-790346           │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-291044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ stop    │ -p no-preload-291044 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p newest-cni-515571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │                     │
	│ stop    │ -p newest-cni-515571 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-515571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:37 UTC │ 08 Nov 25 10:38 UTC │
	│ addons  │ enable dashboard -p no-preload-291044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ start   │ -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ image   │ newest-cni-515571 image list --format=json                                                                                                                                                                                                    │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-515571 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │                     │
	│ delete  │ -p newest-cni-515571                                                                                                                                                                                                                          │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ delete  │ -p newest-cni-515571                                                                                                                                                                                                                          │ newest-cni-515571            │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │ 08 Nov 25 10:38 UTC │
	│ start   │ -p auto-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-731120                  │ jenkins │ v1.37.0 │ 08 Nov 25 10:38 UTC │                     │
	│ image   │ no-preload-291044 image list --format=json                                                                                                                                                                                                    │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:39 UTC │ 08 Nov 25 10:39 UTC │
	│ pause   │ -p no-preload-291044 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-291044            │ jenkins │ v1.37.0 │ 08 Nov 25 10:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 10:38:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 10:38:29.139048 1239783 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:38:29.139359 1239783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:38:29.139392 1239783 out.go:374] Setting ErrFile to fd 2...
	I1108 10:38:29.139411 1239783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:38:29.139713 1239783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:38:29.140171 1239783 out.go:368] Setting JSON to false
	I1108 10:38:29.141406 1239783 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33655,"bootTime":1762564655,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:38:29.141503 1239783 start.go:143] virtualization:  
	I1108 10:38:29.149275 1239783 out.go:179] * [auto-731120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:38:29.153549 1239783 notify.go:221] Checking for updates...
	I1108 10:38:29.158325 1239783 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:38:29.161841 1239783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:38:29.165487 1239783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:38:29.168385 1239783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:38:29.172710 1239783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:38:29.176980 1239783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:38:29.181694 1239783 config.go:182] Loaded profile config "no-preload-291044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:29.181793 1239783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:38:29.229764 1239783 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:38:29.229886 1239783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:38:29.308290 1239783 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:38:29.297773156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:38:29.308409 1239783 docker.go:319] overlay module found
	I1108 10:38:29.312809 1239783 out.go:179] * Using the docker driver based on user configuration
	I1108 10:38:29.317629 1239783 start.go:309] selected driver: docker
	I1108 10:38:29.317654 1239783 start.go:930] validating driver "docker" against <nil>
	I1108 10:38:29.317704 1239783 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:38:29.318646 1239783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:38:29.453762 1239783 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-08 10:38:29.44074252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:38:29.453924 1239783 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 10:38:29.454166 1239783 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 10:38:29.459915 1239783 out.go:179] * Using Docker driver with root privileges
	I1108 10:38:29.463435 1239783 cni.go:84] Creating CNI manager for ""
	I1108 10:38:29.463502 1239783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:38:29.463510 1239783 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 10:38:29.463603 1239783 start.go:353] cluster config:
	{Name:auto-731120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-731120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1108 10:38:29.467093 1239783 out.go:179] * Starting "auto-731120" primary control-plane node in "auto-731120" cluster
	I1108 10:38:29.470281 1239783 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 10:38:29.473630 1239783 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 10:38:29.476672 1239783 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:29.476724 1239783 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1108 10:38:29.476735 1239783 cache.go:59] Caching tarball of preloaded images
	I1108 10:38:29.476820 1239783 preload.go:233] Found /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1108 10:38:29.476829 1239783 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 10:38:29.476937 1239783 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/config.json ...
	I1108 10:38:29.476953 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/config.json: {Name:mk193cca89a381ede09b2e13a126a53ce22bb603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:29.477082 1239783 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 10:38:29.497936 1239783 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 10:38:29.497954 1239783 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 10:38:29.497967 1239783 cache.go:233] Successfully downloaded all kic artifacts
	I1108 10:38:29.498001 1239783 start.go:360] acquireMachinesLock for auto-731120: {Name:mkd59fbb5cd3f8b291cfdf5c975f1abdf6be63da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 10:38:29.498093 1239783 start.go:364] duration metric: took 75.353µs to acquireMachinesLock for "auto-731120"
	I1108 10:38:29.498117 1239783 start.go:93] Provisioning new machine with config: &{Name:auto-731120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-731120 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:38:29.498181 1239783 start.go:125] createHost starting for "" (driver="docker")
	W1108 10:38:29.414195 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:31.915864 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	I1108 10:38:29.502004 1239783 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 10:38:29.502239 1239783 start.go:159] libmachine.API.Create for "auto-731120" (driver="docker")
	I1108 10:38:29.502274 1239783 client.go:173] LocalClient.Create starting
	I1108 10:38:29.502386 1239783 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem
	I1108 10:38:29.502420 1239783 main.go:143] libmachine: Decoding PEM data...
	I1108 10:38:29.502434 1239783 main.go:143] libmachine: Parsing certificate...
	I1108 10:38:29.502493 1239783 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem
	I1108 10:38:29.502513 1239783 main.go:143] libmachine: Decoding PEM data...
	I1108 10:38:29.502523 1239783 main.go:143] libmachine: Parsing certificate...
	I1108 10:38:29.502867 1239783 cli_runner.go:164] Run: docker network inspect auto-731120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 10:38:29.530276 1239783 cli_runner.go:211] docker network inspect auto-731120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 10:38:29.530361 1239783 network_create.go:284] running [docker network inspect auto-731120] to gather additional debugging logs...
	I1108 10:38:29.530384 1239783 cli_runner.go:164] Run: docker network inspect auto-731120
	W1108 10:38:29.554073 1239783 cli_runner.go:211] docker network inspect auto-731120 returned with exit code 1
	I1108 10:38:29.554106 1239783 network_create.go:287] error running [docker network inspect auto-731120]: docker network inspect auto-731120: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-731120 not found
	I1108 10:38:29.554121 1239783 network_create.go:289] output of [docker network inspect auto-731120]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-731120 not found
	
	** /stderr **
	I1108 10:38:29.554230 1239783 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:38:29.574380 1239783 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f127b1978c3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:c7:37:65:8c:96} reservation:<nil>}
	I1108 10:38:29.574716 1239783 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b98bf73d2e94 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:99:be:46:ea:86} reservation:<nil>}
	I1108 10:38:29.575029 1239783 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c4df73992be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e6:ad:c1:c0:ea:6d} reservation:<nil>}
	I1108 10:38:29.575486 1239783 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f8690}
	I1108 10:38:29.575509 1239783 network_create.go:124] attempt to create docker network auto-731120 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 10:38:29.575566 1239783 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-731120 auto-731120
	I1108 10:38:29.639863 1239783 network_create.go:108] docker network auto-731120 192.168.76.0/24 created
	I1108 10:38:29.639902 1239783 kic.go:121] calculated static IP "192.168.76.2" for the "auto-731120" container
	I1108 10:38:29.639970 1239783 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 10:38:29.665631 1239783 cli_runner.go:164] Run: docker volume create auto-731120 --label name.minikube.sigs.k8s.io=auto-731120 --label created_by.minikube.sigs.k8s.io=true
	I1108 10:38:29.685143 1239783 oci.go:103] Successfully created a docker volume auto-731120
	I1108 10:38:29.685225 1239783 cli_runner.go:164] Run: docker run --rm --name auto-731120-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-731120 --entrypoint /usr/bin/test -v auto-731120:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 10:38:30.557198 1239783 oci.go:107] Successfully prepared a docker volume auto-731120
	I1108 10:38:30.557244 1239783 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:30.557263 1239783 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 10:38:30.557326 1239783 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-731120:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 10:38:34.406702 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:36.407908 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:38.411812 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	I1108 10:38:35.911607 1239783 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-731120:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (5.354241102s)
	I1108 10:38:35.911638 1239783 kic.go:203] duration metric: took 5.354371011s to extract preloaded images to volume ...
	W1108 10:38:35.911777 1239783 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1108 10:38:35.911887 1239783 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 10:38:36.016820 1239783 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-731120 --name auto-731120 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-731120 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-731120 --network auto-731120 --ip 192.168.76.2 --volume auto-731120:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 10:38:36.502546 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Running}}
	I1108 10:38:36.527187 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:38:36.558696 1239783 cli_runner.go:164] Run: docker exec auto-731120 stat /var/lib/dpkg/alternatives/iptables
	I1108 10:38:36.632342 1239783 oci.go:144] the created container "auto-731120" has a running status.
	I1108 10:38:36.632368 1239783 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa...
	I1108 10:38:37.358297 1239783 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 10:38:37.393702 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:38:37.428486 1239783 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 10:38:37.428504 1239783 kic_runner.go:114] Args: [docker exec --privileged auto-731120 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 10:38:37.492553 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:38:37.512828 1239783 machine.go:94] provisionDockerMachine start ...
	I1108 10:38:37.512919 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:37.533034 1239783 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:37.533360 1239783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1108 10:38:37.533376 1239783 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 10:38:37.534000 1239783 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41570->127.0.0.1:34557: read: connection reset by peer
	W1108 10:38:40.909662 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:43.411245 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	I1108 10:38:40.683878 1239783 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-731120
	
	I1108 10:38:40.683901 1239783 ubuntu.go:182] provisioning hostname "auto-731120"
	I1108 10:38:40.683969 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:40.701351 1239783 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:40.701665 1239783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1108 10:38:40.701681 1239783 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-731120 && echo "auto-731120" | sudo tee /etc/hostname
	I1108 10:38:40.865211 1239783 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-731120
	
	I1108 10:38:40.865289 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:40.882980 1239783 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:40.883285 1239783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1108 10:38:40.883305 1239783 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-731120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-731120/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-731120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 10:38:41.036652 1239783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 10:38:41.036679 1239783 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-1027379/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-1027379/.minikube}
	I1108 10:38:41.036710 1239783 ubuntu.go:190] setting up certificates
	I1108 10:38:41.036728 1239783 provision.go:84] configureAuth start
	I1108 10:38:41.036791 1239783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-731120
	I1108 10:38:41.054020 1239783 provision.go:143] copyHostCerts
	I1108 10:38:41.054093 1239783 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem, removing ...
	I1108 10:38:41.054108 1239783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem
	I1108 10:38:41.054193 1239783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/key.pem (1675 bytes)
	I1108 10:38:41.054318 1239783 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem, removing ...
	I1108 10:38:41.054330 1239783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem
	I1108 10:38:41.054362 1239783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.pem (1078 bytes)
	I1108 10:38:41.054424 1239783 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem, removing ...
	I1108 10:38:41.054433 1239783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem
	I1108 10:38:41.054458 1239783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-1027379/.minikube/cert.pem (1123 bytes)
	I1108 10:38:41.054537 1239783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem org=jenkins.auto-731120 san=[127.0.0.1 192.168.76.2 auto-731120 localhost minikube]
	I1108 10:38:42.045745 1239783 provision.go:177] copyRemoteCerts
	I1108 10:38:42.045819 1239783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 10:38:42.045862 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.065339 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:38:42.204125 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 10:38:42.247789 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1108 10:38:42.272630 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 10:38:42.297222 1239783 provision.go:87] duration metric: took 1.260474925s to configureAuth
	I1108 10:38:42.297263 1239783 ubuntu.go:206] setting minikube options for container-runtime
	I1108 10:38:42.297474 1239783 config.go:182] Loaded profile config "auto-731120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:38:42.297614 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.317888 1239783 main.go:143] libmachine: Using SSH client type: native
	I1108 10:38:42.318314 1239783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1108 10:38:42.318332 1239783 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 10:38:42.614229 1239783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 10:38:42.614254 1239783 machine.go:97] duration metric: took 5.101407316s to provisionDockerMachine
	I1108 10:38:42.614264 1239783 client.go:176] duration metric: took 13.111983873s to LocalClient.Create
	I1108 10:38:42.614279 1239783 start.go:167] duration metric: took 13.112041783s to libmachine.API.Create "auto-731120"
	I1108 10:38:42.614286 1239783 start.go:293] postStartSetup for "auto-731120" (driver="docker")
	I1108 10:38:42.614296 1239783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 10:38:42.614371 1239783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 10:38:42.614421 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.643704 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:38:42.754757 1239783 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 10:38:42.759110 1239783 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 10:38:42.759137 1239783 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 10:38:42.759149 1239783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/addons for local assets ...
	I1108 10:38:42.759210 1239783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-1027379/.minikube/files for local assets ...
	I1108 10:38:42.759314 1239783 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem -> 10292342.pem in /etc/ssl/certs
	I1108 10:38:42.759436 1239783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 10:38:42.769771 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:42.787544 1239783 start.go:296] duration metric: took 173.241642ms for postStartSetup
	I1108 10:38:42.787928 1239783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-731120
	I1108 10:38:42.805510 1239783 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/config.json ...
	I1108 10:38:42.805795 1239783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:38:42.805849 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.822948 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:38:42.927147 1239783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 10:38:42.931748 1239783 start.go:128] duration metric: took 13.433551764s to createHost
	I1108 10:38:42.931778 1239783 start.go:83] releasing machines lock for "auto-731120", held for 13.433668322s
	I1108 10:38:42.931847 1239783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-731120
	I1108 10:38:42.948962 1239783 ssh_runner.go:195] Run: cat /version.json
	I1108 10:38:42.948996 1239783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 10:38:42.949017 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.949057 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:38:42.970812 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:38:42.976871 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:38:43.182187 1239783 ssh_runner.go:195] Run: systemctl --version
	I1108 10:38:43.188954 1239783 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 10:38:43.230103 1239783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 10:38:43.235439 1239783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 10:38:43.235508 1239783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 10:38:43.266356 1239783 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1108 10:38:43.266440 1239783 start.go:496] detecting cgroup driver to use...
	I1108 10:38:43.266561 1239783 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1108 10:38:43.266653 1239783 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 10:38:43.284845 1239783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 10:38:43.297604 1239783 docker.go:218] disabling cri-docker service (if available) ...
	I1108 10:38:43.297670 1239783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 10:38:43.316530 1239783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 10:38:43.336976 1239783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 10:38:43.466078 1239783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 10:38:43.600064 1239783 docker.go:234] disabling docker service ...
	I1108 10:38:43.600150 1239783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 10:38:43.624592 1239783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 10:38:43.638711 1239783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 10:38:43.759820 1239783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 10:38:43.880328 1239783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 10:38:43.893437 1239783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 10:38:43.919882 1239783 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 10:38:43.920011 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.930319 1239783 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 10:38:43.930437 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.939963 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.949168 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.961469 1239783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 10:38:43.970451 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.979979 1239783 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:43.993756 1239783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 10:38:44.005740 1239783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 10:38:44.015679 1239783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 10:38:44.024121 1239783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:44.157504 1239783 ssh_runner.go:195] Run: sudo systemctl restart crio
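Before this restart, the runner rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to cgroupfs, conmon_cgroup is re-added as "pod", and the unprivileged-port sysctl is injected, followed by daemon-reload and a CRI-O restart. A minimal sketch of the same edits, assuming local root access and plain os/exec instead of minikube's ssh_runner:

package main

import (
	"log"
	"os/exec"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	// The same substitutions the log shows, run locally for illustration
	// (minikube executes them on the node through its ssh_runner).
	seds := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		`/conmon_cgroup = .*/d`,
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	}
	for _, expr := range seds {
		if out, err := exec.Command("sudo", "sed", "-i", expr, conf).CombinedOutput(); err != nil {
			log.Fatalf("sed %q failed: %v: %s", expr, err, out)
		}
	}

	// Pick up the new configuration, as the log does with daemon-reload + restart crio.
	for _, args := range [][]string{{"systemctl", "daemon-reload"}, {"systemctl", "restart", "crio"}} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v: %s", args, err, out)
		}
	}
}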
	I1108 10:38:44.299251 1239783 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 10:38:44.299343 1239783 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 10:38:44.303393 1239783 start.go:564] Will wait 60s for crictl version
	I1108 10:38:44.303486 1239783 ssh_runner.go:195] Run: which crictl
	I1108 10:38:44.307319 1239783 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 10:38:44.334541 1239783 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 10:38:44.334666 1239783 ssh_runner.go:195] Run: crio --version
	I1108 10:38:44.364774 1239783 ssh_runner.go:195] Run: crio --version
	I1108 10:38:44.394846 1239783 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 10:38:44.397798 1239783 cli_runner.go:164] Run: docker network inspect auto-731120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 10:38:44.415922 1239783 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 10:38:44.419628 1239783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:38:44.431226 1239783 kubeadm.go:884] updating cluster {Name:auto-731120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-731120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 10:38:44.431333 1239783 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 10:38:44.431389 1239783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:38:44.463899 1239783 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:38:44.463925 1239783 crio.go:433] Images already preloaded, skipping extraction
	I1108 10:38:44.463978 1239783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 10:38:44.492050 1239783 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 10:38:44.492079 1239783 cache_images.go:86] Images are preloaded, skipping loading
	I1108 10:38:44.492087 1239783 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 10:38:44.492174 1239783 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-731120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-731120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 10:38:44.492259 1239783 ssh_runner.go:195] Run: crio config
	I1108 10:38:44.553518 1239783 cni.go:84] Creating CNI manager for ""
	I1108 10:38:44.553554 1239783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:38:44.553572 1239783 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 10:38:44.553596 1239783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-731120 NodeName:auto-731120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 10:38:44.553741 1239783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-731120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
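The kubeadm.yaml above is rendered from the kubeadm options struct and later copied to /var/tmp/minikube/kubeadm.yaml.new (2208 bytes) before `kubeadm init` runs. A heavily trimmed sketch of rendering such a document with text/template, assuming a hypothetical kubeadmParams struct and reproducing only a few of the fields from the dump (the real template in minikube is much larger):

package main

import (
	"log"
	"os"
	"text/template"
)

// Trimmed stand-in for minikube's kubeadm template; only a handful of the
// fields from the dump above are reproduced.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type kubeadmParams struct {
	AdvertiseAddress, CRISocket, NodeName, ControlPlaneAddress string
	KubernetesVersion, PodSubnet, ServiceCIDR                  string
	APIServerPort                                              int
}

func main() {
	p := kubeadmParams{
		AdvertiseAddress:    "192.168.76.2",
		APIServerPort:       8443,
		CRISocket:           "/var/run/crio/crio.sock",
		NodeName:            "auto-731120",
		ControlPlaneAddress: "control-plane.minikube.internal",
		KubernetesVersion:   "v1.34.1",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	}
	// Written locally here; minikube scp's the rendered file to
	// /var/tmp/minikube/kubeadm.yaml.new on the node, as the log shows.
	f, err := os.Create("kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(f, p); err != nil {
		log.Fatal(err)
	}
}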
	I1108 10:38:44.553828 1239783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 10:38:44.562644 1239783 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 10:38:44.562737 1239783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 10:38:44.570477 1239783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1108 10:38:44.583435 1239783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 10:38:44.596910 1239783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1108 10:38:44.609738 1239783 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 10:38:44.613384 1239783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 10:38:44.623370 1239783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:38:44.742168 1239783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:38:44.764734 1239783 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120 for IP: 192.168.76.2
	I1108 10:38:44.764757 1239783 certs.go:195] generating shared ca certs ...
	I1108 10:38:44.764774 1239783 certs.go:227] acquiring lock for ca certs: {Name:mk74c0be983b11bd587e3ba953be358e5004f112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:44.764980 1239783 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key
	I1108 10:38:44.765042 1239783 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key
	I1108 10:38:44.765054 1239783 certs.go:257] generating profile certs ...
	I1108 10:38:44.765125 1239783 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.key
	I1108 10:38:44.765145 1239783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt with IP's: []
	I1108 10:38:45.390556 1239783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt ...
	I1108 10:38:45.390586 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt: {Name:mk746bfa66833d669f1861e9ec5e0248f18da719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:45.390937 1239783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.key ...
	I1108 10:38:45.390956 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.key: {Name:mka2b36c93d2d4ba6913e777534652f1fb328644 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:45.391154 1239783 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key.5766770b
	I1108 10:38:45.391176 1239783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt.5766770b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 10:38:45.830250 1239783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt.5766770b ...
	I1108 10:38:45.830279 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt.5766770b: {Name:mk2bffa3e0b315142911c7c424a2d666b8827f84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:45.830468 1239783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key.5766770b ...
	I1108 10:38:45.830481 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key.5766770b: {Name:mk19c6ef73a4835ab27470811610f495176dd59b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:45.830569 1239783 certs.go:382] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt.5766770b -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt
	I1108 10:38:45.830658 1239783 certs.go:386] copying /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key.5766770b -> /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key
	I1108 10:38:45.830722 1239783 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.key
	I1108 10:38:45.830740 1239783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.crt with IP's: []
	I1108 10:38:46.647689 1239783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.crt ...
	I1108 10:38:46.647723 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.crt: {Name:mk7d0f3f3799428da70d615ebae19f4feee14096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:38:46.647919 1239783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.key ...
	I1108 10:38:46.647930 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.key: {Name:mk710f6012005d54fc176370624632be88a68964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
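Each profile certificate generated above is a leaf signed by the shared minikubeCA key, with the SANs listed in the log (for the apiserver cert: 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2). A minimal crypto/x509 sketch of that idea, assuming a freshly generated stand-in CA rather than the existing keys under .minikube, and RSA keys for simplicity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Stand-in CA (the real run reuses the existing minikubeCA key pair).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Leaf certificate with the apiserver IP SANs shown in the log.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	check(err)

	check(os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER}), 0o644))
	check(os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(leafKey)}), 0o600))
}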
	I1108 10:38:46.648114 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem (1338 bytes)
	W1108 10:38:46.648157 1239783 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234_empty.pem, impossibly tiny 0 bytes
	I1108 10:38:46.648166 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 10:38:46.648193 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/ca.pem (1078 bytes)
	I1108 10:38:46.648223 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/cert.pem (1123 bytes)
	I1108 10:38:46.648248 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/key.pem (1675 bytes)
	I1108 10:38:46.648294 1239783 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem (1708 bytes)
	I1108 10:38:46.648926 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 10:38:46.670981 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 10:38:46.691272 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 10:38:46.709536 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1108 10:38:46.727738 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1108 10:38:46.746464 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 10:38:46.764400 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 10:38:46.781764 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 10:38:46.801433 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 10:38:46.820193 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/certs/1029234.pem --> /usr/share/ca-certificates/1029234.pem (1338 bytes)
	I1108 10:38:46.837682 1239783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/ssl/certs/10292342.pem --> /usr/share/ca-certificates/10292342.pem (1708 bytes)
	I1108 10:38:46.861922 1239783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 10:38:46.875167 1239783 ssh_runner.go:195] Run: openssl version
	I1108 10:38:46.881502 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1029234.pem && ln -fs /usr/share/ca-certificates/1029234.pem /etc/ssl/certs/1029234.pem"
	I1108 10:38:46.890165 1239783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1029234.pem
	I1108 10:38:46.896784 1239783 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:40 /usr/share/ca-certificates/1029234.pem
	I1108 10:38:46.896857 1239783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1029234.pem
	I1108 10:38:46.944345 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1029234.pem /etc/ssl/certs/51391683.0"
	I1108 10:38:46.959219 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10292342.pem && ln -fs /usr/share/ca-certificates/10292342.pem /etc/ssl/certs/10292342.pem"
	I1108 10:38:46.968013 1239783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10292342.pem
	I1108 10:38:46.971797 1239783 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:40 /usr/share/ca-certificates/10292342.pem
	I1108 10:38:46.971859 1239783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10292342.pem
	I1108 10:38:47.013417 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10292342.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 10:38:47.022286 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 10:38:47.031384 1239783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:47.035194 1239783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:34 /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:47.035267 1239783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 10:38:47.077139 1239783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 10:38:47.085476 1239783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 10:38:47.092926 1239783 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 10:38:47.092987 1239783 kubeadm.go:401] StartCluster: {Name:auto-731120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-731120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 10:38:47.093065 1239783 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 10:38:47.093128 1239783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 10:38:47.126995 1239783 cri.go:89] found id: ""
	I1108 10:38:47.127080 1239783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 10:38:47.141543 1239783 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 10:38:47.151022 1239783 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 10:38:47.151091 1239783 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 10:38:47.166481 1239783 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 10:38:47.166503 1239783 kubeadm.go:158] found existing configuration files:
	
	I1108 10:38:47.166560 1239783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 10:38:47.174269 1239783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 10:38:47.174343 1239783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 10:38:47.182226 1239783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 10:38:47.190257 1239783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 10:38:47.190321 1239783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 10:38:47.197673 1239783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 10:38:47.205350 1239783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 10:38:47.205418 1239783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 10:38:47.213413 1239783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 10:38:47.221531 1239783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 10:38:47.221617 1239783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 10:38:47.229359 1239783 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 10:38:47.271780 1239783 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 10:38:47.272104 1239783 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 10:38:47.295144 1239783 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 10:38:47.295223 1239783 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1108 10:38:47.295266 1239783 kubeadm.go:319] OS: Linux
	I1108 10:38:47.295317 1239783 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 10:38:47.295373 1239783 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1108 10:38:47.295426 1239783 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 10:38:47.295480 1239783 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 10:38:47.295534 1239783 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 10:38:47.295588 1239783 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 10:38:47.295639 1239783 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 10:38:47.295692 1239783 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 10:38:47.295744 1239783 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1108 10:38:47.374593 1239783 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 10:38:47.374725 1239783 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 10:38:47.374836 1239783 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 10:38:47.382990 1239783 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1108 10:38:45.921295 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:48.407724 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	I1108 10:38:47.389126 1239783 out.go:252]   - Generating certificates and keys ...
	I1108 10:38:47.389273 1239783 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 10:38:47.389393 1239783 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 10:38:47.622865 1239783 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 10:38:47.815472 1239783 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 10:38:48.056313 1239783 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 10:38:48.721769 1239783 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1108 10:38:50.408796 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	W1108 10:38:52.922218 1235505 pod_ready.go:104] pod "coredns-66bc5c9577-nvtlg" is not "Ready", error: <nil>
	I1108 10:38:49.198183 1239783 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 10:38:49.198450 1239783 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-731120 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:38:49.625642 1239783 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 10:38:49.626031 1239783 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-731120 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 10:38:50.411992 1239783 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 10:38:51.073598 1239783 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 10:38:51.148481 1239783 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 10:38:51.149075 1239783 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 10:38:51.488824 1239783 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 10:38:53.193718 1239783 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 10:38:53.536315 1239783 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 10:38:54.482269 1239783 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 10:38:54.818390 1239783 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 10:38:54.819114 1239783 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 10:38:54.822251 1239783 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 10:38:54.910595 1235505 pod_ready.go:94] pod "coredns-66bc5c9577-nvtlg" is "Ready"
	I1108 10:38:54.910633 1235505 pod_ready.go:86] duration metric: took 32.008905809s for pod "coredns-66bc5c9577-nvtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:54.914461 1235505 pod_ready.go:83] waiting for pod "etcd-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:54.921754 1235505 pod_ready.go:94] pod "etcd-no-preload-291044" is "Ready"
	I1108 10:38:54.921784 1235505 pod_ready.go:86] duration metric: took 7.292889ms for pod "etcd-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:54.924622 1235505 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:54.930277 1235505 pod_ready.go:94] pod "kube-apiserver-no-preload-291044" is "Ready"
	I1108 10:38:54.930312 1235505 pod_ready.go:86] duration metric: took 5.659846ms for pod "kube-apiserver-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:54.935021 1235505 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:55.107171 1235505 pod_ready.go:94] pod "kube-controller-manager-no-preload-291044" is "Ready"
	I1108 10:38:55.107202 1235505 pod_ready.go:86] duration metric: took 172.154181ms for pod "kube-controller-manager-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:55.306165 1235505 pod_ready.go:83] waiting for pod "kube-proxy-2m8tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:55.706872 1235505 pod_ready.go:94] pod "kube-proxy-2m8tx" is "Ready"
	I1108 10:38:55.706898 1235505 pod_ready.go:86] duration metric: took 400.659794ms for pod "kube-proxy-2m8tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:55.908353 1235505 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:56.305397 1235505 pod_ready.go:94] pod "kube-scheduler-no-preload-291044" is "Ready"
	I1108 10:38:56.305421 1235505 pod_ready.go:86] duration metric: took 397.042761ms for pod "kube-scheduler-no-preload-291044" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 10:38:56.305434 1235505 pod_ready.go:40] duration metric: took 33.407684709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 10:38:56.372731 1235505 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1108 10:38:56.376058 1235505 out.go:179] * Done! kubectl is now configured to use "no-preload-291044" cluster and "default" namespace by default
	I1108 10:38:54.825543 1239783 out.go:252]   - Booting up control plane ...
	I1108 10:38:54.825666 1239783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 10:38:54.825753 1239783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 10:38:54.827493 1239783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 10:38:54.844923 1239783 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 10:38:54.845229 1239783 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 10:38:54.855684 1239783 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 10:38:54.855789 1239783 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 10:38:54.855840 1239783 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 10:38:55.035134 1239783 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 10:38:55.035281 1239783 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 10:38:56.540776 1239783 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501756872s
	I1108 10:38:56.540898 1239783 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 10:38:56.540984 1239783 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 10:38:56.541077 1239783 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 10:38:56.541159 1239783 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 10:38:59.247935 1239783 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.707495775s
	I1108 10:39:03.442533 1239783 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.902469886s
	I1108 10:39:04.043377 1239783 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.50312217s
	I1108 10:39:04.063374 1239783 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 10:39:04.081656 1239783 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 10:39:04.096957 1239783 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 10:39:04.097158 1239783 kubeadm.go:319] [mark-control-plane] Marking the node auto-731120 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 10:39:04.109688 1239783 kubeadm.go:319] [bootstrap-token] Using token: c9kz4p.88phkg0unv6h2q55
	I1108 10:39:04.112569 1239783 out.go:252]   - Configuring RBAC rules ...
	I1108 10:39:04.112698 1239783 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 10:39:04.117482 1239783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 10:39:04.127385 1239783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 10:39:04.131444 1239783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 10:39:04.137923 1239783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 10:39:04.143268 1239783 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 10:39:04.455025 1239783 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 10:39:04.888412 1239783 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 10:39:05.450963 1239783 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 10:39:05.452143 1239783 kubeadm.go:319] 
	I1108 10:39:05.452223 1239783 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 10:39:05.452230 1239783 kubeadm.go:319] 
	I1108 10:39:05.452306 1239783 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 10:39:05.452311 1239783 kubeadm.go:319] 
	I1108 10:39:05.452336 1239783 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 10:39:05.452395 1239783 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 10:39:05.452494 1239783 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 10:39:05.452501 1239783 kubeadm.go:319] 
	I1108 10:39:05.452563 1239783 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 10:39:05.452575 1239783 kubeadm.go:319] 
	I1108 10:39:05.452622 1239783 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 10:39:05.452626 1239783 kubeadm.go:319] 
	I1108 10:39:05.452676 1239783 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 10:39:05.452750 1239783 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 10:39:05.452817 1239783 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 10:39:05.452821 1239783 kubeadm.go:319] 
	I1108 10:39:05.452904 1239783 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 10:39:05.452979 1239783 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 10:39:05.452983 1239783 kubeadm.go:319] 
	I1108 10:39:05.453066 1239783 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c9kz4p.88phkg0unv6h2q55 \
	I1108 10:39:05.453167 1239783 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 \
	I1108 10:39:05.453187 1239783 kubeadm.go:319] 	--control-plane 
	I1108 10:39:05.453192 1239783 kubeadm.go:319] 
	I1108 10:39:05.453275 1239783 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 10:39:05.453279 1239783 kubeadm.go:319] 
	I1108 10:39:05.453359 1239783 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c9kz4p.88phkg0unv6h2q55 \
	I1108 10:39:05.453461 1239783 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f5582bc97549ba8bf6397140298181cbdaa69395c739f2198fb8727d27ba5c8 
	I1108 10:39:05.458462 1239783 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1108 10:39:05.458706 1239783 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1108 10:39:05.458810 1239783 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 10:39:05.458828 1239783 cni.go:84] Creating CNI manager for ""
	I1108 10:39:05.458835 1239783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 10:39:05.461961 1239783 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 10:39:05.464978 1239783 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 10:39:05.470625 1239783 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 10:39:05.470651 1239783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 10:39:05.486663 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 10:39:06.226374 1239783 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 10:39:06.226506 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:06.226600 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-731120 minikube.k8s.io/updated_at=2025_11_08T10_39_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=auto-731120 minikube.k8s.io/primary=true
	I1108 10:39:06.411461 1239783 ops.go:34] apiserver oom_adj: -16
	I1108 10:39:06.411607 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:06.911740 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:07.411669 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:07.912598 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:08.411896 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:08.912228 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:09.411649 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:09.912196 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:10.411680 1239783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 10:39:10.613371 1239783 kubeadm.go:1114] duration metric: took 4.386909583s to wait for elevateKubeSystemPrivileges
	I1108 10:39:10.613402 1239783 kubeadm.go:403] duration metric: took 23.520417897s to StartCluster
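The repeated `kubectl get sa default` calls above are minikube waiting for the default ServiceAccount to appear before it finishes elevateKubeSystemPrivileges. A small sketch of that wait loop, assuming the kubectl path and kubeconfig from the log, a roughly 500ms retry interval, and an arbitrary two-minute timeout:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // timeout value is an assumption

	// Retry `kubectl get sa default` until the default ServiceAccount shows up,
	// mirroring the repeated calls in the log above.
	for {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			log.Println("default service account is present")
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("timed out waiting for default service account: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}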
	I1108 10:39:10.613420 1239783 settings.go:142] acquiring lock: {Name:mk789fd3d270b8659b4b2f696053a66ce21498d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:39:10.613476 1239783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:39:10.614484 1239783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-1027379/kubeconfig: {Name:mk5c1e41974db351e10c7d3c71ac50e2578d6fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 10:39:10.614695 1239783 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 10:39:10.614803 1239783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 10:39:10.615049 1239783 config.go:182] Loaded profile config "auto-731120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:39:10.615079 1239783 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 10:39:10.615137 1239783 addons.go:70] Setting storage-provisioner=true in profile "auto-731120"
	I1108 10:39:10.615151 1239783 addons.go:239] Setting addon storage-provisioner=true in "auto-731120"
	I1108 10:39:10.615172 1239783 host.go:66] Checking if "auto-731120" exists ...
	I1108 10:39:10.615889 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:39:10.616125 1239783 addons.go:70] Setting default-storageclass=true in profile "auto-731120"
	I1108 10:39:10.616162 1239783 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-731120"
	I1108 10:39:10.616473 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:39:10.620559 1239783 out.go:179] * Verifying Kubernetes components...
	I1108 10:39:10.634395 1239783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 10:39:10.659830 1239783 addons.go:239] Setting addon default-storageclass=true in "auto-731120"
	I1108 10:39:10.659875 1239783 host.go:66] Checking if "auto-731120" exists ...
	I1108 10:39:10.660327 1239783 cli_runner.go:164] Run: docker container inspect auto-731120 --format={{.State.Status}}
	I1108 10:39:10.680085 1239783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 10:39:10.683560 1239783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:39:10.683609 1239783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 10:39:10.683673 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:39:10.712306 1239783 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 10:39:10.712332 1239783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 10:39:10.712394 1239783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-731120
	I1108 10:39:10.731853 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:39:10.748735 1239783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/auto-731120/id_rsa Username:docker}
	I1108 10:39:11.023598 1239783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 10:39:11.063113 1239783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 10:39:11.102223 1239783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 10:39:11.102330 1239783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 10:39:12.462741 1239783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.439110753s)
	I1108 10:39:12.462790 1239783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.399657843s)
	I1108 10:39:12.463127 1239783 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.360783547s)
	I1108 10:39:12.463966 1239783 node_ready.go:35] waiting up to 15m0s for node "auto-731120" to be "Ready" ...
	I1108 10:39:12.464201 1239783 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.361955107s)
	I1108 10:39:12.464218 1239783 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1108 10:39:12.522691 1239783 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 10:39:12.524712 1239783 addons.go:515] duration metric: took 1.909605803s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 10:39:12.969383 1239783 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-731120" context rescaled to 1 replicas
	
	
	==> CRI-O <==
	Nov 08 10:38:49 no-preload-291044 crio[651]: time="2025-11-08T10:38:49.815590692Z" level=info msg="Removed container 336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk/dashboard-metrics-scraper" id=bd8630be-163e-4538-b92e-7debf78dca15 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 10:38:52 no-preload-291044 conmon[1134]: conmon d1dbd6cc1f1dc5794d5f <ninfo>: container 1137 exited with status 1
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.808188556Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d18eb3e9-f68f-40ae-a4fa-313bf0f281d0 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.809639509Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1d3e3876-bf7d-46c1-a038-fcc3d0306751 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.812131447Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=79a74312-d2a9-4a60-bf49-1566930ccb06 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.812225082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.819914332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.820890131Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/14f5f7d0215035c3e8346a2ea360ddfc58fdad0b1f748cf6426083e1955fbe37/merged/etc/passwd: no such file or directory"
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.821001709Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/14f5f7d0215035c3e8346a2ea360ddfc58fdad0b1f748cf6426083e1955fbe37/merged/etc/group: no such file or directory"
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.821760817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.851822438Z" level=info msg="Created container 65cbe2bb9985bf3d82c006541771b098511632bf16f3207681bdffd6065d3a5a: kube-system/storage-provisioner/storage-provisioner" id=79a74312-d2a9-4a60-bf49-1566930ccb06 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.857441078Z" level=info msg="Starting container: 65cbe2bb9985bf3d82c006541771b098511632bf16f3207681bdffd6065d3a5a" id=3e033464-6f33-4bc5-b9c3-476f66f731d0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 10:38:52 no-preload-291044 crio[651]: time="2025-11-08T10:38:52.860973167Z" level=info msg="Started container" PID=1621 containerID=65cbe2bb9985bf3d82c006541771b098511632bf16f3207681bdffd6065d3a5a description=kube-system/storage-provisioner/storage-provisioner id=3e033464-6f33-4bc5-b9c3-476f66f731d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2bc67ae4df7ebb08d374e7a97a00aac70c4a9b06878f40b7c9d342d24e127b8
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.617223398Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.624956488Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.625120339Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.625208156Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.63055204Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.630707284Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.630788677Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.63414514Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.634304906Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.634391402Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.640698506Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 10:39:02 no-preload-291044 crio[651]: time="2025-11-08T10:39:02.640853201Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	65cbe2bb9985b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago       Running             storage-provisioner         2                   c2bc67ae4df7e       storage-provisioner                          kube-system
	9ae590763b2f2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago       Exited              dashboard-metrics-scraper   2                   2236741355d59       dashboard-metrics-scraper-6ffb444bf9-h2xvk   kubernetes-dashboard
	6c6e800b9b138       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   69067acda16ce       kubernetes-dashboard-855c9754f9-rttff        kubernetes-dashboard
	6c5c96793404d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   0d37978905194       busybox                                      default
	c33eeb214e958       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   64c93bd624185       coredns-66bc5c9577-nvtlg                     kube-system
	ab334d5bd7ba7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   436584c46788a       kindnet-nct2b                                kube-system
	d1dbd6cc1f1dc       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago       Exited              storage-provisioner         1                   c2bc67ae4df7e       storage-provisioner                          kube-system
	5b59e1565e30b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   88e7c62316202       kube-proxy-2m8tx                             kube-system
	5ff011c39fa1a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   e8b6237022914       kube-controller-manager-no-preload-291044    kube-system
	fef0c37718a66       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   d6d862e4720f1       kube-scheduler-no-preload-291044             kube-system
	99b5f6a837326       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   92531495aa672       kube-apiserver-no-preload-291044             kube-system
	daf6ee479a7ca       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2360825bca4c3       etcd-no-preload-291044                       kube-system
	
	
	==> coredns [c33eeb214e958294220dbe340086eab0da97ee59bafe81bc2bc509133f4b77b0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54796 - 52919 "HINFO IN 8601295041522844400.5362363667463070056. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013066141s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-291044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-291044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=no-preload-291044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T10_37_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 10:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-291044
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 10:39:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 10:38:51 +0000   Sat, 08 Nov 2025 10:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 10:38:51 +0000   Sat, 08 Nov 2025 10:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 10:38:51 +0000   Sat, 08 Nov 2025 10:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 10:38:51 +0000   Sat, 08 Nov 2025 10:37:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-291044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                53ced70c-1627-4fc9-9eaa-b752fd9e6d98
	  Boot ID:                    19ef8260-c0ed-42db-9d81-c50b82271afb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-nvtlg                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     117s
	  kube-system                 etcd-no-preload-291044                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-nct2b                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      118s
	  kube-system                 kube-apiserver-no-preload-291044              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-no-preload-291044     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-2m8tx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-no-preload-291044              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-h2xvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rttff         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 116s                   kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Warning  CgroupV1                 2m12s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node no-preload-291044 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node no-preload-291044 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node no-preload-291044 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m2s                   kubelet          Node no-preload-291044 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m2s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m2s                   kubelet          Node no-preload-291044 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m2s                   kubelet          Node no-preload-291044 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           118s                   node-controller  Node no-preload-291044 event: Registered Node no-preload-291044 in Controller
	  Normal   NodeReady                101s                   kubelet          Node no-preload-291044 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node no-preload-291044 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node no-preload-291044 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node no-preload-291044 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                    node-controller  Node no-preload-291044 event: Registered Node no-preload-291044 in Controller
	
	
	==> dmesg <==
	[Nov 8 10:18] overlayfs: idmapped layers are currently not supported
	[ +23.374699] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:19] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:20] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:22] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +27.610850] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:24] overlayfs: idmapped layers are currently not supported
	[ +32.982934] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:26] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:29] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:30] overlayfs: idmapped layers are currently not supported
	[  +6.924930] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:36] overlayfs: idmapped layers are currently not supported
	[ +30.788294] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 8 10:38] overlayfs: idmapped layers are currently not supported
	[  +6.100629] overlayfs: idmapped layers are currently not supported
	[ +43.651730] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [daf6ee479a7cae60eb0974a556bff3ab215747a99f91f962708a80a61d9ba6f5] <==
	{"level":"warn","ts":"2025-11-08T10:38:18.254800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.271117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.312905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.381677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.439127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.482882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.496504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.541922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.650007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.751182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.809517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.840402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.855201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.892183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.937836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:18.980259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.034171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.073335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.131930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.176764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.259724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.285903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.322035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.346089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T10:38:19.439074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:39:15 up  9:21,  0 user,  load average: 6.13, 4.73, 3.54
	Linux no-preload-291044 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ab334d5bd7ba72aea7af822ddc9751317c502a94c09a2740bcae5d2371922e43] <==
	I1108 10:38:22.330304       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 10:38:22.330527       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 10:38:22.331159       1 main.go:148] setting mtu 1500 for CNI 
	I1108 10:38:22.331175       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 10:38:22.331186       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T10:38:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 10:38:22.615770       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 10:38:22.615794       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 10:38:22.615803       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 10:38:22.616580       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 10:38:52.616343       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 10:38:52.616413       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 10:38:52.616560       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 10:38:52.616639       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 10:38:53.716758       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 10:38:53.716886       1 metrics.go:72] Registering metrics
	I1108 10:38:53.716993       1 controller.go:711] "Syncing nftables rules"
	I1108 10:39:02.616218       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:39:02.616304       1 main.go:301] handling current node
	I1108 10:39:12.616034       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 10:39:12.616062       1 main.go:301] handling current node
	
	
	==> kube-apiserver [99b5f6a8373260a1fb2a88d8f9ff8805d70fb0e4e09b4e2bea1c955d090e83a3] <==
	I1108 10:38:20.615273       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 10:38:20.635659       1 cache.go:39] Caches are synced for autoregister controller
	E1108 10:38:20.644136       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 10:38:20.661955       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 10:38:20.673502       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 10:38:20.673529       1 policy_source.go:240] refreshing policies
	I1108 10:38:20.687124       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 10:38:20.703620       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 10:38:20.717470       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 10:38:20.717622       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 10:38:20.728284       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 10:38:20.721316       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 10:38:20.721330       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 10:38:20.733080       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 10:38:21.328838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 10:38:21.426396       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 10:38:21.451690       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 10:38:21.649989       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 10:38:21.826210       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 10:38:21.941874       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 10:38:22.379799       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.59.26"}
	I1108 10:38:22.429119       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.201.122"}
	I1108 10:38:25.128119       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 10:38:25.426882       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 10:38:25.608302       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5ff011c39fa1a4e6ccf1602407612d6fd09adb5c8853548d45cbc57693896266] <==
	I1108 10:38:25.004333       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 10:38:25.007823       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:38:25.010149       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 10:38:25.016685       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 10:38:25.017277       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 10:38:25.018359       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 10:38:25.018453       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 10:38:25.019756       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 10:38:25.019881       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 10:38:25.021222       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 10:38:25.022364       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 10:38:25.026902       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 10:38:25.028067       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 10:38:25.029335       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 10:38:25.037719       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 10:38:25.037856       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 10:38:25.037945       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:38:25.037995       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 10:38:25.038057       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 10:38:25.038086       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 10:38:25.041377       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 10:38:25.041412       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 10:38:25.041419       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 10:38:25.048920       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 10:38:25.054908       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [5b59e1565e30b8151a42f654301131ca5a9b85a2c6f83767a903111bd6f7c44b] <==
	I1108 10:38:22.563252       1 server_linux.go:53] "Using iptables proxy"
	I1108 10:38:22.806480       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 10:38:22.906643       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 10:38:22.906703       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 10:38:22.906836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 10:38:22.935640       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 10:38:22.935695       1 server_linux.go:132] "Using iptables Proxier"
	I1108 10:38:22.942328       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 10:38:22.942614       1 server.go:527] "Version info" version="v1.34.1"
	I1108 10:38:22.942639       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:38:22.948589       1 config.go:200] "Starting service config controller"
	I1108 10:38:22.948614       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 10:38:22.948632       1 config.go:106] "Starting endpoint slice config controller"
	I1108 10:38:22.948636       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 10:38:22.948648       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 10:38:22.948652       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 10:38:22.949252       1 config.go:309] "Starting node config controller"
	I1108 10:38:22.949269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 10:38:22.949275       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 10:38:23.048753       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 10:38:23.048791       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 10:38:23.048859       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fef0c37718a669a3a308b4a0ee7aa3629f5c411a3f86070c7497fead7a730494] <==
	I1108 10:38:18.546463       1 serving.go:386] Generated self-signed cert in-memory
	W1108 10:38:20.428835       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 10:38:20.429773       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 10:38:20.429846       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 10:38:20.429879       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 10:38:20.626094       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 10:38:20.626128       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 10:38:20.655108       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 10:38:20.655262       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 10:38:20.685853       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:38:20.685884       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 10:38:20.786335       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 10:38:25 no-preload-291044 kubelet[766]: I1108 10:38:25.748466     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r684k\" (UniqueName: \"kubernetes.io/projected/a722ea55-9e8c-4c23-aa7f-ad48c06d67ec-kube-api-access-r684k\") pod \"kubernetes-dashboard-855c9754f9-rttff\" (UID: \"a722ea55-9e8c-4c23-aa7f-ad48c06d67ec\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rttff"
	Nov 08 10:38:25 no-preload-291044 kubelet[766]: I1108 10:38:25.748534     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a722ea55-9e8c-4c23-aa7f-ad48c06d67ec-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rttff\" (UID: \"a722ea55-9e8c-4c23-aa7f-ad48c06d67ec\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rttff"
	Nov 08 10:38:25 no-preload-291044 kubelet[766]: W1108 10:38:25.869694     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/crio-2236741355d59f0eac33adda0ece88f8b2edcb0fe453f29cefe02adec2d7beb6 WatchSource:0}: Error finding container 2236741355d59f0eac33adda0ece88f8b2edcb0fe453f29cefe02adec2d7beb6: Status 404 returned error can't find the container with id 2236741355d59f0eac33adda0ece88f8b2edcb0fe453f29cefe02adec2d7beb6
	Nov 08 10:38:26 no-preload-291044 kubelet[766]: W1108 10:38:26.221424     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4dafcc75ae9d33c4aef9359a8981f009824e2fb72b6ca9992ab288dc1a29ea4a/crio-69067acda16ce370940915e0221591850a8363254affeb8c0a42726323a59089 WatchSource:0}: Error finding container 69067acda16ce370940915e0221591850a8363254affeb8c0a42726323a59089: Status 404 returned error can't find the container with id 69067acda16ce370940915e0221591850a8363254affeb8c0a42726323a59089
	Nov 08 10:38:31 no-preload-291044 kubelet[766]: I1108 10:38:31.716767     766 scope.go:117] "RemoveContainer" containerID="15b033f5b8d903e7496d87ae6cc76ccbfcc3a2882024b2ce2e5eff819d2a1545"
	Nov 08 10:38:32 no-preload-291044 kubelet[766]: I1108 10:38:32.735846     766 scope.go:117] "RemoveContainer" containerID="336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0"
	Nov 08 10:38:32 no-preload-291044 kubelet[766]: E1108 10:38:32.736495     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:38:32 no-preload-291044 kubelet[766]: I1108 10:38:32.738475     766 scope.go:117] "RemoveContainer" containerID="15b033f5b8d903e7496d87ae6cc76ccbfcc3a2882024b2ce2e5eff819d2a1545"
	Nov 08 10:38:33 no-preload-291044 kubelet[766]: I1108 10:38:33.744670     766 scope.go:117] "RemoveContainer" containerID="336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0"
	Nov 08 10:38:33 no-preload-291044 kubelet[766]: E1108 10:38:33.744825     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:38:35 no-preload-291044 kubelet[766]: I1108 10:38:35.847406     766 scope.go:117] "RemoveContainer" containerID="336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0"
	Nov 08 10:38:35 no-preload-291044 kubelet[766]: E1108 10:38:35.847584     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:38:49 no-preload-291044 kubelet[766]: I1108 10:38:49.426711     766 scope.go:117] "RemoveContainer" containerID="336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0"
	Nov 08 10:38:49 no-preload-291044 kubelet[766]: I1108 10:38:49.795927     766 scope.go:117] "RemoveContainer" containerID="336a97e346d6a9426713ffeb581ee77ce75969a3e7082d68d022e59c779ad6e0"
	Nov 08 10:38:49 no-preload-291044 kubelet[766]: I1108 10:38:49.796222     766 scope.go:117] "RemoveContainer" containerID="9ae590763b2f2cda1d610cc1f78b2ea77114a7740040e661837d6264d55fa642"
	Nov 08 10:38:49 no-preload-291044 kubelet[766]: E1108 10:38:49.796371     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:38:49 no-preload-291044 kubelet[766]: I1108 10:38:49.825848     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rttff" podStartSLOduration=12.986437713 podStartE2EDuration="24.825831079s" podCreationTimestamp="2025-11-08 10:38:25 +0000 UTC" firstStartedPulling="2025-11-08 10:38:26.225114336 +0000 UTC m=+14.103300973" lastFinishedPulling="2025-11-08 10:38:38.064507694 +0000 UTC m=+25.942694339" observedRunningTime="2025-11-08 10:38:38.784778817 +0000 UTC m=+26.662965462" watchObservedRunningTime="2025-11-08 10:38:49.825831079 +0000 UTC m=+37.704017724"
	Nov 08 10:38:52 no-preload-291044 kubelet[766]: I1108 10:38:52.807209     766 scope.go:117] "RemoveContainer" containerID="d1dbd6cc1f1dc5794d5f0bdb0bec35359fb7abbfb462e4a28128e68598c92cad"
	Nov 08 10:38:55 no-preload-291044 kubelet[766]: I1108 10:38:55.847616     766 scope.go:117] "RemoveContainer" containerID="9ae590763b2f2cda1d610cc1f78b2ea77114a7740040e661837d6264d55fa642"
	Nov 08 10:38:55 no-preload-291044 kubelet[766]: E1108 10:38:55.847814     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:39:08 no-preload-291044 kubelet[766]: I1108 10:39:08.426475     766 scope.go:117] "RemoveContainer" containerID="9ae590763b2f2cda1d610cc1f78b2ea77114a7740040e661837d6264d55fa642"
	Nov 08 10:39:08 no-preload-291044 kubelet[766]: E1108 10:39:08.427117     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h2xvk_kubernetes-dashboard(23f87f7b-61f8-47f1-ae60-d66bafd556a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h2xvk" podUID="23f87f7b-61f8-47f1-ae60-d66bafd556a6"
	Nov 08 10:39:09 no-preload-291044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 10:39:09 no-preload-291044 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 10:39:09 no-preload-291044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [6c6e800b9b138a613ccf880559f7dab5ee4100ad4b76378594c6f7fa68a7d4af] <==
	2025/11/08 10:38:38 Using namespace: kubernetes-dashboard
	2025/11/08 10:38:38 Using in-cluster config to connect to apiserver
	2025/11/08 10:38:38 Using secret token for csrf signing
	2025/11/08 10:38:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 10:38:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 10:38:38 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 10:38:38 Generating JWE encryption key
	2025/11/08 10:38:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 10:38:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 10:38:38 Initializing JWE encryption key from synchronized object
	2025/11/08 10:38:38 Creating in-cluster Sidecar client
	2025/11/08 10:38:38 Serving insecurely on HTTP port: 9090
	2025/11/08 10:38:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:39:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 10:38:38 Starting overwatch
	
	
	==> storage-provisioner [65cbe2bb9985bf3d82c006541771b098511632bf16f3207681bdffd6065d3a5a] <==
	I1108 10:38:52.891317       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 10:38:52.936432       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 10:38:52.936585       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 10:38:52.939835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:38:56.394710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:00.655380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:04.253581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:07.307198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:10.330693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:10.339733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:39:10.339896       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 10:39:10.344122       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62d2386e-59b0-4bb3-9886-de4d8f35e247", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-291044_aff3036a-f182-48f7-9ca0-b1e7e39ad7cc became leader
	W1108 10:39:10.348955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:39:10.349193       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-291044_aff3036a-f182-48f7-9ca0-b1e7e39ad7cc!
	W1108 10:39:10.370770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 10:39:10.449742       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-291044_aff3036a-f182-48f7-9ca0-b1e7e39ad7cc!
	W1108 10:39:12.373673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:12.383770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:14.391143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 10:39:14.398788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d1dbd6cc1f1dc5794d5f0bdb0bec35359fb7abbfb462e4a28128e68598c92cad] <==
	I1108 10:38:22.480783       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 10:38:52.482916       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-291044 -n no-preload-291044
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-291044 -n no-preload-291044: exit status 2 (369.12769ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-291044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.93s)
E1108 10:44:56.492455 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:44:57.134438 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:44:58.416182 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:00.978474 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:06.100623 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:14.180540 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:16.342230 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:21.759251 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (260/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.63
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.64
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.65
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.12
27 TestAddons/Setup 172.76
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.8
48 TestAddons/StoppedEnableDisable 12.62
49 TestCertOptions 43.44
50 TestCertExpiration 259.82
52 TestForceSystemdFlag 38.9
53 TestForceSystemdEnv 39.76
58 TestErrorSpam/setup 31.62
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.2
61 TestErrorSpam/pause 6.06
62 TestErrorSpam/unpause 5.54
63 TestErrorSpam/stop 1.53
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 84.73
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 32.38
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.46
75 TestFunctional/serial/CacheCmd/cache/add_local 1.14
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
77 TestFunctional/serial/CacheCmd/cache/list 0.11
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 29.62
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.49
86 TestFunctional/serial/LogsFileCmd 1.46
87 TestFunctional/serial/InvalidService 4.26
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 10.67
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.04
98 TestFunctional/parallel/AddonsCmd 0.21
99 TestFunctional/parallel/PersistentVolumeClaim 27.25
101 TestFunctional/parallel/SSHCmd 0.69
102 TestFunctional/parallel/CpCmd 1.91
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.71
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 1.01
113 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
116 TestFunctional/parallel/Version/short 0.08
117 TestFunctional/parallel/Version/components 0.77
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.42
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
125 TestFunctional/parallel/ImageCommands/ImageBuild 4.57
126 TestFunctional/parallel/ImageCommands/Setup 0.65
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/MountCmd/any-port 7.1
144 TestFunctional/parallel/MountCmd/specific-port 2.41
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.77
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
148 TestFunctional/parallel/ProfileCmd/profile_list 0.42
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
150 TestFunctional/parallel/ServiceCmd/List 1.33
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.4
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 185.95
163 TestMultiControlPlane/serial/DeployApp 37.53
164 TestMultiControlPlane/serial/PingHostFromPods 1.53
165 TestMultiControlPlane/serial/AddWorkerNode 59.69
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.06
168 TestMultiControlPlane/serial/CopyFile 20.26
169 TestMultiControlPlane/serial/StopSecondaryNode 12.93
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
171 TestMultiControlPlane/serial/RestartSecondaryNode 22.75
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 128.59
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.83
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
176 TestMultiControlPlane/serial/StopCluster 36.04
177 TestMultiControlPlane/serial/RestartCluster 64.39
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
179 TestMultiControlPlane/serial/AddSecondaryNode 81.63
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 50.83
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.84
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 38.48
211 TestKicCustomNetwork/use_default_bridge_network 37.67
212 TestKicExistingNetwork 34.92
213 TestKicCustomSubnet 37.58
214 TestKicStaticIP 33.44
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 71.03
219 TestMountStart/serial/StartWithMountFirst 9.55
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 10.37
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 8.04
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 133.82
231 TestMultiNode/serial/DeployApp2Nodes 5.26
232 TestMultiNode/serial/PingHostFrom2Pods 0.9
233 TestMultiNode/serial/AddNode 59.54
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.68
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 8.66
239 TestMultiNode/serial/RestartKeepsNodes 72.07
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 24.13
242 TestMultiNode/serial/RestartMultiNode 57.93
243 TestMultiNode/serial/ValidateNameConflict 39.05
248 TestPreload 122.62
250 TestScheduledStopUnix 109.84
253 TestInsufficientStorage 11.07
254 TestRunningBinaryUpgrade 53.43
256 TestKubernetesUpgrade 364.68
257 TestMissingContainerUpgrade 119.3
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
260 TestNoKubernetes/serial/StartWithK8s 42.38
261 TestNoKubernetes/serial/StartWithStopK8s 7.9
262 TestNoKubernetes/serial/Start 8.04
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.43
264 TestNoKubernetes/serial/ProfileList 3.9
265 TestNoKubernetes/serial/Stop 1.33
266 TestNoKubernetes/serial/StartNoArgs 7.95
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
268 TestStoppedBinaryUpgrade/Setup 0.77
269 TestStoppedBinaryUpgrade/Upgrade 56.98
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
279 TestPause/serial/Start 82.29
280 TestPause/serial/SecondStartNoReconfiguration 29.96
289 TestNetworkPlugins/group/false 5.57
294 TestStartStop/group/old-k8s-version/serial/FirstStart 62.31
295 TestStartStop/group/old-k8s-version/serial/DeployApp 8.51
297 TestStartStop/group/old-k8s-version/serial/Stop 12.01
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/old-k8s-version/serial/SecondStart 50.95
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.64
307 TestStartStop/group/embed-certs/serial/FirstStart 85.83
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 12
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.52
313 TestStartStop/group/embed-certs/serial/DeployApp 8.36
315 TestStartStop/group/embed-certs/serial/Stop 12.03
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/embed-certs/serial/SecondStart 55.43
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
323 TestStartStop/group/no-preload/serial/FirstStart 71.78
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.15
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.4
329 TestStartStop/group/newest-cni/serial/FirstStart 36.91
330 TestStartStop/group/no-preload/serial/DeployApp 10.38
332 TestStartStop/group/no-preload/serial/Stop 12.1
333 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/Stop 1.32
336 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
337 TestStartStop/group/newest-cni/serial/SecondStart 19.36
338 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
339 TestStartStop/group/no-preload/serial/SecondStart 53.43
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
344 TestNetworkPlugins/group/auto/Start 86.08
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.13
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
349 TestNetworkPlugins/group/kindnet/Start 80.58
350 TestNetworkPlugins/group/auto/KubeletFlags 0.35
351 TestNetworkPlugins/group/auto/NetCatPod 10.39
352 TestNetworkPlugins/group/auto/DNS 0.17
353 TestNetworkPlugins/group/auto/Localhost 0.15
354 TestNetworkPlugins/group/auto/HairPin 0.15
355 TestNetworkPlugins/group/calico/Start 69.06
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.47
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.36
359 TestNetworkPlugins/group/kindnet/DNS 0.21
360 TestNetworkPlugins/group/kindnet/Localhost 0.19
361 TestNetworkPlugins/group/kindnet/HairPin 0.2
362 TestNetworkPlugins/group/custom-flannel/Start 70.5
363 TestNetworkPlugins/group/calico/ControllerPod 6.02
364 TestNetworkPlugins/group/calico/KubeletFlags 0.41
365 TestNetworkPlugins/group/calico/NetCatPod 13.34
366 TestNetworkPlugins/group/calico/DNS 0.24
367 TestNetworkPlugins/group/calico/Localhost 0.2
368 TestNetworkPlugins/group/calico/HairPin 0.17
369 TestNetworkPlugins/group/enable-default-cni/Start 80.37
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.35
372 TestNetworkPlugins/group/custom-flannel/DNS 0.16
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
375 TestNetworkPlugins/group/flannel/Start 59.96
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.37
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/bridge/Start 82.29
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
384 TestNetworkPlugins/group/flannel/NetCatPod 10.47
385 TestNetworkPlugins/group/flannel/DNS 0.22
386 TestNetworkPlugins/group/flannel/Localhost 0.16
387 TestNetworkPlugins/group/flannel/HairPin 0.15
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 9.26
390 TestNetworkPlugins/group/bridge/DNS 0.15
391 TestNetworkPlugins/group/bridge/Localhost 0.12
392 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (5.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-554144 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-554144 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.632879815s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.63s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1108 09:33:25.822932 1029234 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1108 09:33:25.823014 1029234 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-554144
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-554144: exit status 85 (90.420267ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-554144 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-554144 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:33:20
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:33:20.233201 1029239 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:33:20.233414 1029239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:20.233444 1029239 out.go:374] Setting ErrFile to fd 2...
	I1108 09:33:20.233466 1029239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:20.234277 1029239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	W1108 09:33:20.234488 1029239 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21865-1027379/.minikube/config/config.json: open /home/jenkins/minikube-integration/21865-1027379/.minikube/config/config.json: no such file or directory
	I1108 09:33:20.234944 1029239 out.go:368] Setting JSON to true
	I1108 09:33:20.235878 1029239 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29746,"bootTime":1762564655,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 09:33:20.235951 1029239 start.go:143] virtualization:  
	I1108 09:33:20.239812 1029239 out.go:99] [download-only-554144] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1108 09:33:20.240027 1029239 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball: no such file or directory
	I1108 09:33:20.240146 1029239 notify.go:221] Checking for updates...
	I1108 09:33:20.243961 1029239 out.go:171] MINIKUBE_LOCATION=21865
	I1108 09:33:20.247030 1029239 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:33:20.249978 1029239 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 09:33:20.252895 1029239 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 09:33:20.255971 1029239 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1108 09:33:20.261601 1029239 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 09:33:20.261891 1029239 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:33:20.292124 1029239 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:33:20.292231 1029239 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:33:20.356738 1029239 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-08 09:33:20.347113662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:33:20.356841 1029239 docker.go:319] overlay module found
	I1108 09:33:20.359902 1029239 out.go:99] Using the docker driver based on user configuration
	I1108 09:33:20.359944 1029239 start.go:309] selected driver: docker
	I1108 09:33:20.359952 1029239 start.go:930] validating driver "docker" against <nil>
	I1108 09:33:20.360054 1029239 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:33:20.417676 1029239 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-08 09:33:20.408566885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:33:20.417839 1029239 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:33:20.418109 1029239 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1108 09:33:20.418300 1029239 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 09:33:20.421448 1029239 out.go:171] Using Docker driver with root privileges
	I1108 09:33:20.424356 1029239 cni.go:84] Creating CNI manager for ""
	I1108 09:33:20.424471 1029239 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:33:20.424486 1029239 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:33:20.424572 1029239 start.go:353] cluster config:
	{Name:download-only-554144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-554144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:33:20.427504 1029239 out.go:99] Starting "download-only-554144" primary control-plane node in "download-only-554144" cluster
	I1108 09:33:20.427533 1029239 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:33:20.430544 1029239 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:33:20.430594 1029239 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 09:33:20.430693 1029239 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:33:20.448510 1029239 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:33:20.448762 1029239 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 09:33:20.448877 1029239 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:33:20.498529 1029239 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1108 09:33:20.498553 1029239 cache.go:59] Caching tarball of preloaded images
	I1108 09:33:20.498709 1029239 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 09:33:20.502148 1029239 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1108 09:33:20.502190 1029239 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1108 09:33:20.589264 1029239 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1108 09:33:20.589394 1029239 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-554144 host does not exist
	  To start a cluster, run: "minikube start -p download-only-554144"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-554144
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-504302 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-504302 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.637004649s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.64s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1108 09:33:30.923826 1029234 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1108 09:33:30.923870 1029234 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-1027379/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-504302
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-504302: exit status 85 (100.183027ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-554144 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-554144 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ delete  │ -p download-only-554144                                                                                                                                                   │ download-only-554144 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │ 08 Nov 25 09:33 UTC │
	│ start   │ -o=json --download-only -p download-only-504302 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-504302 │ jenkins │ v1.37.0 │ 08 Nov 25 09:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:33:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:33:26.330771 1029435 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:33:26.330956 1029435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:26.330968 1029435 out.go:374] Setting ErrFile to fd 2...
	I1108 09:33:26.330973 1029435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:26.331224 1029435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:33:26.331623 1029435 out.go:368] Setting JSON to true
	I1108 09:33:26.332410 1029435 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29752,"bootTime":1762564655,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 09:33:26.332505 1029435 start.go:143] virtualization:  
	I1108 09:33:26.335731 1029435 out.go:99] [download-only-504302] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 09:33:26.335915 1029435 notify.go:221] Checking for updates...
	I1108 09:33:26.338835 1029435 out.go:171] MINIKUBE_LOCATION=21865
	I1108 09:33:26.341728 1029435 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:33:26.344653 1029435 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 09:33:26.347561 1029435 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 09:33:26.350464 1029435 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1108 09:33:26.356174 1029435 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 09:33:26.356487 1029435 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:33:26.389283 1029435 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:33:26.389396 1029435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:33:26.449920 1029435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-08 09:33:26.439642858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:33:26.450031 1029435 docker.go:319] overlay module found
	I1108 09:33:26.453049 1029435 out.go:99] Using the docker driver based on user configuration
	I1108 09:33:26.453099 1029435 start.go:309] selected driver: docker
	I1108 09:33:26.453110 1029435 start.go:930] validating driver "docker" against <nil>
	I1108 09:33:26.453236 1029435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:33:26.514785 1029435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-08 09:33:26.505611503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:33:26.514945 1029435 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:33:26.515241 1029435 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1108 09:33:26.515400 1029435 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 09:33:26.518609 1029435 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-504302 host does not exist
	  To start a cluster, run: "minikube start -p download-only-504302"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-504302
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1108 09:33:32.089333 1029234 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-870798 --alsologtostderr --binary-mirror http://127.0.0.1:37897 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-870798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-870798
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-517137
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-517137: exit status 85 (98.51251ms)

                                                
                                                
-- stdout --
	* Profile "addons-517137" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-517137"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.12s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-517137
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-517137: exit status 85 (116.847281ms)

                                                
                                                
-- stdout --
	* Profile "addons-517137" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-517137"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.12s)

                                                
                                    
TestAddons/Setup (172.76s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-517137 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-517137 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m52.762214725s)
--- PASS: TestAddons/Setup (172.76s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-517137 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-517137 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.8s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-517137 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-517137 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6c3a29de-9cda-45ba-93b1-4af4480dc1a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6c3a29de-9cda-45ba-93b1-4af4480dc1a0] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004261192s
addons_test.go:694: (dbg) Run:  kubectl --context addons-517137 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-517137 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-517137 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-517137 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.80s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.62s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-517137
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-517137: (12.249775737s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-517137
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-517137
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-517137
--- PASS: TestAddons/StoppedEnableDisable (12.62s)

                                                
                                    
TestCertOptions (43.44s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-517657 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-517657 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (40.466272818s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-517657 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-517657 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-517657 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-517657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-517657
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-517657: (2.127427822s)
--- PASS: TestCertOptions (43.44s)
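Note: the SAN and port assertions above can be repeated by hand with the same openssl call the test issues at cert_options_test.go:60; the grep filter is illustrative only and assumes the cert-options-517657 profile still exists (the test deletes it afterwards):

	out/minikube-linux-arm64 -p cert-options-517657 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"

The custom API server port (8555) should likewise appear in the server URL printed by the kubectl config view call at cert_options_test.go:88.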
TestCertExpiration (259.82s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-837698 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-837698 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (46.665530298s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-837698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-837698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (30.628632211s)
helpers_test.go:175: Cleaning up "cert-expiration-837698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-837698
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-837698: (2.52321709s)
--- PASS: TestCertExpiration (259.82s)

TestForceSystemdFlag (38.9s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-845139 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-845139 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.557369489s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-845139 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-845139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-845139
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-845139: (2.956796339s)
--- PASS: TestForceSystemdFlag (38.90s)

TestForceSystemdEnv (39.76s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-680693 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-680693 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.071876484s)
helpers_test.go:175: Cleaning up "force-systemd-env-680693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-680693
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-680693: (2.689733747s)
--- PASS: TestForceSystemdEnv (39.76s)

TestErrorSpam/setup (31.62s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-077820 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-077820 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-077820 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-077820 --driver=docker  --container-runtime=crio: (31.621361333s)
--- PASS: TestErrorSpam/setup (31.62s)

TestErrorSpam/start (0.79s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.2s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 status
--- PASS: TestErrorSpam/status (1.20s)

TestErrorSpam/pause (6.06s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 pause: exit status 80 (1.81902539s)

-- stdout --
	* Pausing node nospam-077820 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:40:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 pause: exit status 80 (2.06760799s)

-- stdout --
	* Pausing node nospam-077820 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:40:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 pause: exit status 80 (2.169227685s)

-- stdout --
	* Pausing node nospam-077820 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:40:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.06s)
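Note: all three exit-80 pauses above fail on the same underlying call quoted in the stderr, "sudo runc list -f json", because /run/runc does not exist inside the node. A manual reproduction along the same lines (illustrative only, while the nospam-077820 profile is still up) would be:

	out/minikube-linux-arm64 -p nospam-077820 ssh "sudo runc list -f json"
	out/minikube-linux-arm64 -p nospam-077820 ssh "ls /run/runc"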
TestErrorSpam/unpause (5.54s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 unpause: exit status 80 (1.616599002s)

-- stdout --
	* Unpausing node nospam-077820 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:40:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 unpause: exit status 80 (1.923744645s)

-- stdout --
	* Unpausing node nospam-077820 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:40:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 unpause: exit status 80 (1.999368442s)

-- stdout --
	* Unpausing node nospam-077820 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:40:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.54s)

TestErrorSpam/stop (1.53s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 stop: (1.325459844s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-077820 --log_dir /tmp/nospam-077820 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21865-1027379/.minikube/files/etc/test/nested/copy/1029234/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (84.73s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386623 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1108 09:41:26.428691 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:26.435081 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:26.446572 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:26.468090 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:26.509598 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:26.591009 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:26.752520 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:27.074238 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:27.715715 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:28.997367 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:31.559151 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:36.680791 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:41:46.922710 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-386623 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m24.729869928s)
--- PASS: TestFunctional/serial/StartWithProxy (84.73s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.38s)
=== RUN   TestFunctional/serial/SoftStart
I1108 09:42:04.945658 1029234 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386623 --alsologtostderr -v=8
E1108 09:42:07.404108 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-386623 --alsologtostderr -v=8: (32.374909856s)
functional_test.go:678: soft start took 32.375398343s for "functional-386623" cluster.
I1108 09:42:37.320864 1029234 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (32.38s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-386623 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-386623 cache add registry.k8s.io/pause:3.1: (1.169824988s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-386623 cache add registry.k8s.io/pause:3.3: (1.18101338s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-386623 cache add registry.k8s.io/pause:latest: (1.108013819s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

TestFunctional/serial/CacheCmd/cache/add_local (1.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-386623 /tmp/TestFunctionalserialCacheCmdcacheadd_local4081614861/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 cache add minikube-local-cache-test:functional-386623
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 cache delete minikube-local-cache-test:functional-386623
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-386623
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.11s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.351507ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 kubectl -- --context functional-386623 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-386623 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (29.62s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386623 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1108 09:42:48.365499 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-386623 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.61566998s)
functional_test.go:776: restart took 29.61577502s for "functional-386623" cluster.
I1108 09:43:14.439163 1029234 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (29.62s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-386623 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.49s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-386623 logs: (1.493915509s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.46s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 logs --file /tmp/TestFunctionalserialLogsFileCmd1507929024/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-386623 logs --file /tmp/TestFunctionalserialLogsFileCmd1507929024/001/logs.txt: (1.463183232s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (4.26s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-386623 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-386623
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-386623: exit status 115 (379.909135ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31587 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-386623 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 config get cpus: exit status 14 (66.648045ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 config get cpus: exit status 14 (113.616238ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (10.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-386623 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-386623 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1056488: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.67s)

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386623 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-386623 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.53603ms)

-- stdout --
	* [functional-386623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1108 09:53:50.374911 1055976 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:53:50.375029 1055976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:50.375034 1055976 out.go:374] Setting ErrFile to fd 2...
	I1108 09:53:50.375046 1055976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:50.375434 1055976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:53:50.375909 1055976 out.go:368] Setting JSON to false
	I1108 09:53:50.377223 1055976 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30976,"bootTime":1762564655,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 09:53:50.377343 1055976 start.go:143] virtualization:  
	I1108 09:53:50.381024 1055976 out.go:179] * [functional-386623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 09:53:50.384040 1055976 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:53:50.384143 1055976 notify.go:221] Checking for updates...
	I1108 09:53:50.389684 1055976 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:53:50.394141 1055976 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 09:53:50.396994 1055976 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 09:53:50.399905 1055976 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 09:53:50.402786 1055976 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:53:50.406217 1055976 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:50.406786 1055976 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:53:50.435452 1055976 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:53:50.435559 1055976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:50.504295 1055976 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 09:53:50.488505703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:53:50.504402 1055976 docker.go:319] overlay module found
	I1108 09:53:50.507548 1055976 out.go:179] * Using the docker driver based on existing profile
	I1108 09:53:50.510349 1055976 start.go:309] selected driver: docker
	I1108 09:53:50.510373 1055976 start.go:930] validating driver "docker" against &{Name:functional-386623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-386623 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:53:50.510481 1055976 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:53:50.513943 1055976 out.go:203] 
	W1108 09:53:50.516831 1055976 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1108 09:53:50.519812 1055976 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386623 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
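Note: the exit-23 run above is the expected RSRC_INSUFFICIENT_REQ_MEMORY validation for --memory 250MB, which is below the 1800MB usable minimum quoted in the output; the follow-up dry run at functional_test.go:1006 simply drops the flag. An explicit request above the minimum (2200MB here is an arbitrary illustrative value) should validate cleanly:

	out/minikube-linux-arm64 start -p functional-386623 --dry-run --memory 2200MB --alsologtostderr --driver=docker  --container-runtime=crio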
TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386623 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-386623 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (189.606494ms)

-- stdout --
	* [functional-386623] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1108 09:53:51.865641 1056317 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:53:51.865788 1056317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:51.865818 1056317 out.go:374] Setting ErrFile to fd 2...
	I1108 09:53:51.865835 1056317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:51.866219 1056317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:53:51.866623 1056317 out.go:368] Setting JSON to false
	I1108 09:53:51.867533 1056317 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30977,"bootTime":1762564655,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 09:53:51.867603 1056317 start.go:143] virtualization:  
	I1108 09:53:51.873001 1056317 out.go:179] * [functional-386623] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1108 09:53:51.875787 1056317 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:53:51.875830 1056317 notify.go:221] Checking for updates...
	I1108 09:53:51.878611 1056317 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:53:51.881371 1056317 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 09:53:51.884302 1056317 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 09:53:51.887044 1056317 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 09:53:51.889986 1056317 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:53:51.893273 1056317 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:51.893892 1056317 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:53:51.923946 1056317 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 09:53:51.924054 1056317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:51.981445 1056317 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 09:53:51.971875739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:53:51.981560 1056317 docker.go:319] overlay module found
	I1108 09:53:51.984635 1056317 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1108 09:53:51.987517 1056317 start.go:309] selected driver: docker
	I1108 09:53:51.987534 1056317 start.go:930] validating driver "docker" against &{Name:functional-386623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-386623 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:53:51.987644 1056317 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:53:51.991023 1056317 out.go:203] 
	W1108 09:53:51.993764 1056317 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1108 09:53:51.996554 1056317 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [0e416398-1230-4cdd-b5c2-bd925a8d0ec6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003503308s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-386623 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-386623 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-386623 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-386623 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [21ed8f77-496b-4d29-bcab-c3084e088122] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [21ed8f77-496b-4d29-bcab-c3084e088122] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003444748s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-386623 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-386623 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-386623 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d34e040c-7a32-4d4d-8e08-6c8113227681] Pending
helpers_test.go:352: "sp-pod" [d34e040c-7a32-4d4d-8e08-6c8113227681] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.006725992s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-386623 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.25s)
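
The PVC flow above applies a claim and a pod, writes a file through the mounted volume, recreates the pod, and checks that the file survives. A minimal sketch of manifests in the spirit of testdata/storage-provisioner/pvc.yaml and pod.yaml follows; only the names myclaim, sp-pod, the myfrontend container, and the /tmp/mount path are taken from the log, while the access mode, storage size, and image are assumptions and the real files in the minikube repo may differ.

# Hypothetical approximation of testdata/storage-provisioner/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim            # name confirmed by "kubectl get pvc myclaim" above
spec:
  accessModes:
    - ReadWriteOnce        # assumed access mode
  resources:
    requests:
      storage: 500Mi       # assumed size
---
# Hypothetical approximation of testdata/storage-provisioner/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod             # pod name confirmed by the wait loop above
  labels:
    test: storage-provisioner
spec:
  containers:
    - name: myfrontend     # container name taken from the readiness message above
      image: docker.io/library/nginx:alpine   # assumed image
      volumeMounts:
        - mountPath: /tmp/mount                # matches the "touch /tmp/mount/foo" check
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim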

                                                
                                    
TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh -n functional-386623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 cp functional-386623:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd627051172/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh -n functional-386623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh -n functional-386623 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.91s)

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1029234/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "sudo cat /etc/test/nested/copy/1029234/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1029234.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "sudo cat /etc/ssl/certs/1029234.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1029234.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "sudo cat /usr/share/ca-certificates/1029234.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/10292342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "sudo cat /etc/ssl/certs/10292342.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/10292342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "sudo cat /usr/share/ca-certificates/10292342.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-386623 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 ssh "sudo systemctl is-active docker": exit status 1 (480.268082ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 ssh "sudo systemctl is-active containerd": exit status 1 (528.296546ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.01s)

                                                
                                    
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-386623 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-386623 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-386623 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-386623 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1051190: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-386623 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-386623 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [db75257c-4a82-4203-8362-bc83e28bf7db] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [db75257c-4a82-4203-8362-bc83e28bf7db] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004164987s
I1108 09:43:32.135056 1029234 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)
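
For reference, testdata/testsvc.yaml creates a pod labeled run=nginx-svc and a service named nginx-svc whose LoadBalancer ingress IP the tunnel later reports as 10.98.250.128. A minimal sketch under those assumptions is shown below; the pod and service names and the run=nginx-svc label come from the log, while the image and port are guesses and the actual manifest in the minikube repo may differ.

# Hypothetical approximation of testdata/testsvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc          # label matched by the wait loop above
spec:
  containers:
    - name: nginx           # container name taken from the readiness message above
      image: docker.io/library/nginx:alpine   # assumed image
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer        # the tunnel supplies the ingress IP checked in WaitService/IngressIP
  selector:
    run: nginx-svc
  ports:
    - port: 80
      targetPort: 80        # assumed port mapping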

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-386623 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-386623 image ls --format short --alsologtostderr:
I1108 09:54:03.579841 1056956 out.go:360] Setting OutFile to fd 1 ...
I1108 09:54:03.580055 1056956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:54:03.580121 1056956 out.go:374] Setting ErrFile to fd 2...
I1108 09:54:03.580144 1056956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:54:03.580491 1056956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
I1108 09:54:03.581220 1056956 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:54:03.581384 1056956 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:54:03.581894 1056956 cli_runner.go:164] Run: docker container inspect functional-386623 --format={{.State.Status}}
I1108 09:54:03.602956 1056956 ssh_runner.go:195] Run: systemctl --version
I1108 09:54:03.603008 1056956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
I1108 09:54:03.622429 1056956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
I1108 09:54:03.731319 1056956 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-386623 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/library/nginx                 │ latest             │ 2d5a8f08b76da │ 176MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-386623  │ f2030b02ee4d7 │ 1.64MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-386623 image ls --format table --alsologtostderr:
I1108 09:54:08.685524 1058110 out.go:360] Setting OutFile to fd 1 ...
I1108 09:54:08.685732 1058110 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:54:08.685746 1058110 out.go:374] Setting ErrFile to fd 2...
I1108 09:54:08.685752 1058110 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:54:08.686064 1058110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
I1108 09:54:08.686900 1058110 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:54:08.687059 1058110 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:54:08.687624 1058110 cli_runner.go:164] Run: docker container inspect functional-386623 --format={{.State.Status}}
I1108 09:54:08.710350 1058110 ssh_runner.go:195] Run: systemctl --version
I1108 09:54:08.710400 1058110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
I1108 09:54:08.731587 1058110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
I1108 09:54:08.843354 1058110 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-386623 image ls --format json --alsologtostderr:
[{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1b7f2a8bfa16feaa20ed7dc5473ceafc53d17d6f81f10c6a9621df89983db494","repoDigests":["docker.io/library/dc4a33cde5a863162449db90759a560fa97a5bd3944e6df7757b0bbc85194217-tmp@sha256:310ed761c8f64462034af8c5b040920c8926ad09969364f2f24d73c521c2cef8"],"repoTags":[],"siz
e":"1638178"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821
953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"
247562353"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37
e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"2d5a8f08b76da55a3731f09e696a0ee5c6d8115ba5e80c5ae2ae1c210b3b1b98","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33"],"repoTags":["docker.io/library/nginx:latest"],"size":"176006678"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/b
usybox:1.28.4-glibc"],"size":"3774172"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"f2030b02ee4d786fedd3b12c56fc1eba788885b34a98c313e8f8a5b93af7ca92","repoDigests":["localhost/my-image@sha256:6738bc2da0d5fde84dfc95c5f501662b1aaa1d1c7fbd16c0f1a303fd9193f0a2"],"repoTags":["localhost/my-image:functional-386623"],"size":"1640791"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055
d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-386623 image ls --format json --alsologtostderr:
I1108 09:54:08.646081 1058103 out.go:360] Setting OutFile to fd 1 ...
I1108 09:54:08.646251 1058103 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:54:08.646276 1058103 out.go:374] Setting ErrFile to fd 2...
I1108 09:54:08.646297 1058103 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:54:08.646617 1058103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
I1108 09:54:08.647330 1058103 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:54:08.647507 1058103 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:54:08.647981 1058103 cli_runner.go:164] Run: docker container inspect functional-386623 --format={{.State.Status}}
I1108 09:54:08.683833 1058103 ssh_runner.go:195] Run: systemctl --version
I1108 09:54:08.683889 1058103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
I1108 09:54:08.705221 1058103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
I1108 09:54:08.810590 1058103 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-386623 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 2d5a8f08b76da55a3731f09e696a0ee5c6d8115ba5e80c5ae2ae1c210b3b1b98
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33
repoTags:
- docker.io/library/nginx:latest
size: "176006678"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-386623 image ls --format yaml --alsologtostderr:
I1108 09:54:03.825117 1056995 out.go:360] Setting OutFile to fd 1 ...
I1108 09:54:03.825314 1056995 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:54:03.825344 1056995 out.go:374] Setting ErrFile to fd 2...
I1108 09:54:03.825368 1056995 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:54:03.825687 1056995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
I1108 09:54:03.826408 1056995 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:54:03.826586 1056995 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:54:03.827111 1056995 cli_runner.go:164] Run: docker container inspect functional-386623 --format={{.State.Status}}
I1108 09:54:03.845554 1056995 ssh_runner.go:195] Run: systemctl --version
I1108 09:54:03.845608 1056995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
I1108 09:54:03.863740 1056995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
I1108 09:54:03.968641 1056995 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 ssh pgrep buildkitd: exit status 1 (378.519157ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image build -t localhost/my-image:functional-386623 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-386623 image build -t localhost/my-image:functional-386623 testdata/build --alsologtostderr: (3.836166194s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-386623 image build -t localhost/my-image:functional-386623 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1b7f2a8bfa1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-386623
--> f2030b02ee4
Successfully tagged localhost/my-image:functional-386623
f2030b02ee4d786fedd3b12c56fc1eba788885b34a98c313e8f8a5b93af7ca92
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-386623 image build -t localhost/my-image:functional-386623 testdata/build --alsologtostderr:
I1108 09:54:04.515110 1057169 out.go:360] Setting OutFile to fd 1 ...
I1108 09:54:04.520589 1057169 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:54:04.520603 1057169 out.go:374] Setting ErrFile to fd 2...
I1108 09:54:04.520608 1057169 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:54:04.520899 1057169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
I1108 09:54:04.521569 1057169 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:54:04.522095 1057169 config.go:182] Loaded profile config "functional-386623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:54:04.522650 1057169 cli_runner.go:164] Run: docker container inspect functional-386623 --format={{.State.Status}}
I1108 09:54:04.551806 1057169 ssh_runner.go:195] Run: systemctl --version
I1108 09:54:04.551881 1057169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386623
I1108 09:54:04.573655 1057169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34235 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/functional-386623/id_rsa Username:docker}
I1108 09:54:04.679879 1057169 build_images.go:162] Building image from path: /tmp/build.1421515461.tar
I1108 09:54:04.679967 1057169 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1108 09:54:04.688897 1057169 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1421515461.tar
I1108 09:54:04.693724 1057169 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1421515461.tar: stat -c "%s %y" /var/lib/minikube/build/build.1421515461.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1421515461.tar': No such file or directory
I1108 09:54:04.693754 1057169 ssh_runner.go:362] scp /tmp/build.1421515461.tar --> /var/lib/minikube/build/build.1421515461.tar (3072 bytes)
I1108 09:54:04.721381 1057169 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1421515461
I1108 09:54:04.730080 1057169 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1421515461 -xf /var/lib/minikube/build/build.1421515461.tar
I1108 09:54:04.739474 1057169 crio.go:315] Building image: /var/lib/minikube/build/build.1421515461
I1108 09:54:04.739558 1057169 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-386623 /var/lib/minikube/build/build.1421515461 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1108 09:54:08.211528 1057169 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-386623 /var/lib/minikube/build/build.1421515461 --cgroup-manager=cgroupfs: (3.471943849s)
I1108 09:54:08.211599 1057169 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1421515461
I1108 09:54:08.219984 1057169 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1421515461.tar
I1108 09:54:08.232384 1057169 build_images.go:218] Built localhost/my-image:functional-386623 from /tmp/build.1421515461.tar
I1108 09:54:08.232413 1057169 build_images.go:134] succeeded building to: functional-386623
I1108 09:54:08.232418 1057169 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-386623
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image rm kicbase/echo-server:functional-386623 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-386623 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.250.128 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-386623 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
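For reference, the three tunnel steps above can be replayed by hand; a minimal sketch (the curl call is an illustrative stand-in for the test's HTTP probe, and the ingress IP is whatever the jsonpath query returns):
    # keep a tunnel running in the background for the functional-386623 profile
    out/minikube-linux-arm64 -p functional-386623 tunnel --alsologtostderr &
    # read the LoadBalancer ingress IP assigned to the nginx-svc service
    kubectl --context functional-386623 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # hit the service through the tunnel, then stop the background tunnel job
    curl http://<ingress-ip>/
    kill %1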

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-386623 /tmp/TestFunctionalparallelMountCmdany-port3657448542/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1762595013058879670" to /tmp/TestFunctionalparallelMountCmdany-port3657448542/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1762595013058879670" to /tmp/TestFunctionalparallelMountCmdany-port3657448542/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1762595013058879670" to /tmp/TestFunctionalparallelMountCmdany-port3657448542/001/test-1762595013058879670
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (387.03167ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1108 09:43:33.446163 1029234 retry.go:31] will retry after 465.327264ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  8 09:43 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  8 09:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  8 09:43 test-1762595013058879670
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh cat /mount-9p/test-1762595013058879670
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-386623 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [bee00ea7-89bf-4c8b-9585-d7f01fec8418] Pending
helpers_test.go:352: "busybox-mount" [bee00ea7-89bf-4c8b-9585-d7f01fec8418] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [bee00ea7-89bf-4c8b-9585-d7f01fec8418] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [bee00ea7-89bf-4c8b-9585-d7f01fec8418] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006098896s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-386623 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386623 /tmp/TestFunctionalparallelMountCmdany-port3657448542/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.10s)
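The mount flow exercised above can be reproduced manually against the same profile; a minimal sketch (the host directory /tmp/demo-mount is an arbitrary placeholder, not a path from this run):
    # start the 9p mount in the background, in the same form as the test daemon
    out/minikube-linux-arm64 mount -p functional-386623 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
    # confirm the guest sees a 9p filesystem at the mount point and list its contents
    out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-386623 ssh -- ls -la /mount-9p
    # clean up the mount from inside the node
    out/minikube-linux-arm64 -p functional-386623 ssh "sudo umount -f /mount-9p"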

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-386623 /tmp/TestFunctionalparallelMountCmdspecific-port1273805518/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (413.903337ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1108 09:43:40.568733 1029234 retry.go:31] will retry after 587.941715ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386623 /tmp/TestFunctionalparallelMountCmdspecific-port1273805518/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 ssh "sudo umount -f /mount-9p": exit status 1 (441.046994ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-386623 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386623 /tmp/TestFunctionalparallelMountCmdspecific-port1273805518/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-386623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1378303404/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-386623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1378303404/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-386623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1378303404/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T" /mount1: exit status 1 (1.051928495s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1108 09:43:43.620878 1029234 retry.go:31] will retry after 533.443922ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-386623 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1378303404/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1378303404/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386623 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1378303404/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.77s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "364.112092ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.751465ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "382.45366ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.787107ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 service list
2025/11/08 09:54:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-386623 service list: (1.325445523s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-386623 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-386623 service list -o json: (1.395504777s)
functional_test.go:1504: Took "1.395585209s" to run "out/minikube-linux-arm64 -p functional-386623 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.40s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-386623
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-386623
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-386623
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (185.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1108 09:56:26.426343 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m5.052332491s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (185.95s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (37.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- rollout status deployment/busybox
E1108 09:57:49.492698 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 kubectl -- rollout status deployment/busybox: (34.805631289s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-25zgh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-pnvxz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-vvbwj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-25zgh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-pnvxz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-vvbwj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-25zgh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-pnvxz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-vvbwj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (37.53s)
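The deployment and DNS assertions above reduce to a short sequence that can be rerun by hand; a sketch (<busybox-pod> stands for any pod name returned by the get pods call):
    # wait for the test's busybox deployment to roll out, then list its pods
    out/minikube-linux-arm64 -p ha-503681 kubectl -- rollout status deployment/busybox
    out/minikube-linux-arm64 -p ha-503681 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
    # resolve an external name and the in-cluster API service name from inside a pod
    out/minikube-linux-arm64 -p ha-503681 kubectl -- exec <busybox-pod> -- nslookup kubernetes.io
    out/minikube-linux-arm64 -p ha-503681 kubectl -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local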

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-25zgh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-25zgh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-pnvxz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-pnvxz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-vvbwj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 kubectl -- exec busybox-7b57f96db7-vvbwj -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.53s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 node add --alsologtostderr -v 5
E1108 09:58:22.712163 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:22.718590 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:22.729929 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:22.751368 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:22.792768 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:22.874308 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:23.035890 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:23.357332 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:23.999331 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:25.281301 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:27.842755 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:32.964095 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:58:43.206099 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 node add --alsologtostderr -v 5: (58.62659836s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5: (1.058572733s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.69s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-503681 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.060804673s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 status --output json --alsologtostderr -v 5: (1.04708674s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp testdata/cp-test.txt ha-503681:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1465350978/001/cp-test_ha-503681.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681:/home/docker/cp-test.txt ha-503681-m02:/home/docker/cp-test_ha-503681_ha-503681-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m02 "sudo cat /home/docker/cp-test_ha-503681_ha-503681-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681:/home/docker/cp-test.txt ha-503681-m03:/home/docker/cp-test_ha-503681_ha-503681-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m03 "sudo cat /home/docker/cp-test_ha-503681_ha-503681-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681:/home/docker/cp-test.txt ha-503681-m04:/home/docker/cp-test_ha-503681_ha-503681-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m04 "sudo cat /home/docker/cp-test_ha-503681_ha-503681-m04.txt"
E1108 09:59:03.687488 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp testdata/cp-test.txt ha-503681-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1465350978/001/cp-test_ha-503681-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m02:/home/docker/cp-test.txt ha-503681:/home/docker/cp-test_ha-503681-m02_ha-503681.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681 "sudo cat /home/docker/cp-test_ha-503681-m02_ha-503681.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m02:/home/docker/cp-test.txt ha-503681-m03:/home/docker/cp-test_ha-503681-m02_ha-503681-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m03 "sudo cat /home/docker/cp-test_ha-503681-m02_ha-503681-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m02:/home/docker/cp-test.txt ha-503681-m04:/home/docker/cp-test_ha-503681-m02_ha-503681-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m04 "sudo cat /home/docker/cp-test_ha-503681-m02_ha-503681-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp testdata/cp-test.txt ha-503681-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1465350978/001/cp-test_ha-503681-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m03:/home/docker/cp-test.txt ha-503681:/home/docker/cp-test_ha-503681-m03_ha-503681.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681 "sudo cat /home/docker/cp-test_ha-503681-m03_ha-503681.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m03:/home/docker/cp-test.txt ha-503681-m02:/home/docker/cp-test_ha-503681-m03_ha-503681-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m02 "sudo cat /home/docker/cp-test_ha-503681-m03_ha-503681-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m03:/home/docker/cp-test.txt ha-503681-m04:/home/docker/cp-test_ha-503681-m03_ha-503681-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m04 "sudo cat /home/docker/cp-test_ha-503681-m03_ha-503681-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp testdata/cp-test.txt ha-503681-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1465350978/001/cp-test_ha-503681-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m04:/home/docker/cp-test.txt ha-503681:/home/docker/cp-test_ha-503681-m04_ha-503681.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681 "sudo cat /home/docker/cp-test_ha-503681-m04_ha-503681.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m04:/home/docker/cp-test.txt ha-503681-m02:/home/docker/cp-test_ha-503681-m04_ha-503681-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m02 "sudo cat /home/docker/cp-test_ha-503681-m04_ha-503681-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m04:/home/docker/cp-test.txt ha-503681-m03:/home/docker/cp-test_ha-503681-m04_ha-503681-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m03 "sudo cat /home/docker/cp-test_ha-503681-m04_ha-503681-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.26s)
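Every cp/ssh pair in this test follows the same round trip; a minimal sketch of one host-to-node copy and one node-to-node copy, using the test's own file names:
    # copy a file from the host into a node, then read it back over ssh
    out/minikube-linux-arm64 -p ha-503681 cp testdata/cp-test.txt ha-503681-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m02 "sudo cat /home/docker/cp-test.txt"
    # copy the same file from one node to another and verify it on the destination
    out/minikube-linux-arm64 -p ha-503681 cp ha-503681-m02:/home/docker/cp-test.txt ha-503681-m03:/home/docker/cp-test_ha-503681-m02_ha-503681-m03.txt
    out/minikube-linux-arm64 -p ha-503681 ssh -n ha-503681-m03 "sudo cat /home/docker/cp-test_ha-503681-m02_ha-503681-m03.txt"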

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 node stop m02 --alsologtostderr -v 5: (12.16052428s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5: exit status 7 (771.806666ms)

                                                
                                                
-- stdout --
	ha-503681
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503681-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-503681-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-503681-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:59:30.397418 1072768 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:59:30.397610 1072768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:59:30.397667 1072768 out.go:374] Setting ErrFile to fd 2...
	I1108 09:59:30.397687 1072768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:59:30.398048 1072768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 09:59:30.398271 1072768 out.go:368] Setting JSON to false
	I1108 09:59:30.398337 1072768 mustload.go:66] Loading cluster: ha-503681
	I1108 09:59:30.398397 1072768 notify.go:221] Checking for updates...
	I1108 09:59:30.399752 1072768 config.go:182] Loaded profile config "ha-503681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:59:30.399796 1072768 status.go:174] checking status of ha-503681 ...
	I1108 09:59:30.400600 1072768 cli_runner.go:164] Run: docker container inspect ha-503681 --format={{.State.Status}}
	I1108 09:59:30.419622 1072768 status.go:371] ha-503681 host status = "Running" (err=<nil>)
	I1108 09:59:30.419668 1072768 host.go:66] Checking if "ha-503681" exists ...
	I1108 09:59:30.419965 1072768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-503681
	I1108 09:59:30.452952 1072768 host.go:66] Checking if "ha-503681" exists ...
	I1108 09:59:30.453280 1072768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:59:30.453336 1072768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-503681
	I1108 09:59:30.472268 1072768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/ha-503681/id_rsa Username:docker}
	I1108 09:59:30.577788 1072768 ssh_runner.go:195] Run: systemctl --version
	I1108 09:59:30.585902 1072768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:59:30.599201 1072768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:59:30.660003 1072768 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-08 09:59:30.649670414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 09:59:30.660756 1072768 kubeconfig.go:125] found "ha-503681" server: "https://192.168.49.254:8443"
	I1108 09:59:30.660791 1072768 api_server.go:166] Checking apiserver status ...
	I1108 09:59:30.660837 1072768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:59:30.672213 1072768 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1240/cgroup
	I1108 09:59:30.681321 1072768 api_server.go:182] apiserver freezer: "3:freezer:/docker/e5b71001c3da00136bfd3f97d98139ee4e3296b77b2beeecc9102ce55f6bacbe/crio/crio-0ea5ca71e47105ed2226e6685ff98892376bc1d712b60030b829d7c5e4681f2f"
	I1108 09:59:30.681389 1072768 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e5b71001c3da00136bfd3f97d98139ee4e3296b77b2beeecc9102ce55f6bacbe/crio/crio-0ea5ca71e47105ed2226e6685ff98892376bc1d712b60030b829d7c5e4681f2f/freezer.state
	I1108 09:59:30.689251 1072768 api_server.go:204] freezer state: "THAWED"
	I1108 09:59:30.689284 1072768 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1108 09:59:30.697474 1072768 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1108 09:59:30.697506 1072768 status.go:463] ha-503681 apiserver status = Running (err=<nil>)
	I1108 09:59:30.697519 1072768 status.go:176] ha-503681 status: &{Name:ha-503681 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:59:30.697549 1072768 status.go:174] checking status of ha-503681-m02 ...
	I1108 09:59:30.697869 1072768 cli_runner.go:164] Run: docker container inspect ha-503681-m02 --format={{.State.Status}}
	I1108 09:59:30.715548 1072768 status.go:371] ha-503681-m02 host status = "Stopped" (err=<nil>)
	I1108 09:59:30.715577 1072768 status.go:384] host is not running, skipping remaining checks
	I1108 09:59:30.715584 1072768 status.go:176] ha-503681-m02 status: &{Name:ha-503681-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:59:30.715605 1072768 status.go:174] checking status of ha-503681-m03 ...
	I1108 09:59:30.715925 1072768 cli_runner.go:164] Run: docker container inspect ha-503681-m03 --format={{.State.Status}}
	I1108 09:59:30.735115 1072768 status.go:371] ha-503681-m03 host status = "Running" (err=<nil>)
	I1108 09:59:30.735142 1072768 host.go:66] Checking if "ha-503681-m03" exists ...
	I1108 09:59:30.735443 1072768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-503681-m03
	I1108 09:59:30.752761 1072768 host.go:66] Checking if "ha-503681-m03" exists ...
	I1108 09:59:30.753100 1072768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:59:30.753156 1072768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-503681-m03
	I1108 09:59:30.771477 1072768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34250 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/ha-503681-m03/id_rsa Username:docker}
	I1108 09:59:30.886378 1072768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:59:30.900488 1072768 kubeconfig.go:125] found "ha-503681" server: "https://192.168.49.254:8443"
	I1108 09:59:30.900516 1072768 api_server.go:166] Checking apiserver status ...
	I1108 09:59:30.900557 1072768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:59:30.913130 1072768 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	I1108 09:59:30.922283 1072768 api_server.go:182] apiserver freezer: "3:freezer:/docker/7c73746dbaeae03dc6f0924dcf802903285c97380082b607ac33f12e6a140ead/crio/crio-07b055df77e31d3caa097479c92a1fa0b7a87426ba8a822ae847ae1699eae34c"
	I1108 09:59:30.922366 1072768 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7c73746dbaeae03dc6f0924dcf802903285c97380082b607ac33f12e6a140ead/crio/crio-07b055df77e31d3caa097479c92a1fa0b7a87426ba8a822ae847ae1699eae34c/freezer.state
	I1108 09:59:30.930310 1072768 api_server.go:204] freezer state: "THAWED"
	I1108 09:59:30.930342 1072768 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1108 09:59:30.938864 1072768 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1108 09:59:30.938895 1072768 status.go:463] ha-503681-m03 apiserver status = Running (err=<nil>)
	I1108 09:59:30.938906 1072768 status.go:176] ha-503681-m03 status: &{Name:ha-503681-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:59:30.938923 1072768 status.go:174] checking status of ha-503681-m04 ...
	I1108 09:59:30.939234 1072768 cli_runner.go:164] Run: docker container inspect ha-503681-m04 --format={{.State.Status}}
	I1108 09:59:30.957600 1072768 status.go:371] ha-503681-m04 host status = "Running" (err=<nil>)
	I1108 09:59:30.957626 1072768 host.go:66] Checking if "ha-503681-m04" exists ...
	I1108 09:59:30.957955 1072768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-503681-m04
	I1108 09:59:30.975288 1072768 host.go:66] Checking if "ha-503681-m04" exists ...
	I1108 09:59:30.975585 1072768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:59:30.975636 1072768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-503681-m04
	I1108 09:59:30.993183 1072768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/ha-503681-m04/id_rsa Username:docker}
	I1108 09:59:31.098256 1072768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:59:31.114436 1072768 status.go:176] ha-503681-m04 status: &{Name:ha-503681-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.93s)
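The exit status 7 from the status command above is the expected signal that a node is down, not a test failure; a sketch of the same check:
    # stop one control-plane node, then query cluster status
    out/minikube-linux-arm64 -p ha-503681 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5
    # with m02 stopped, status exits non-zero (7 in this run) while healthy nodes still report Running
    echo $?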

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (22.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 node start m02 --alsologtostderr -v 5
E1108 09:59:44.648951 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 node start m02 --alsologtostderr -v 5: (21.496127034s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5: (1.156632361s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.75s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.019463458s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 stop --alsologtostderr -v 5: (26.663006411s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 start --wait true --alsologtostderr -v 5
E1108 10:01:06.571324 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:01:26.426588 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 start --wait true --alsologtostderr -v 5: (1m41.761270024s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.59s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 node delete m03 --alsologtostderr -v 5: (10.850460858s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.83s)
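Node removal can be confirmed the same way the test does it; a minimal sketch:
    # delete the third control-plane node, then re-check cluster and Kubernetes state
    out/minikube-linux-arm64 -p ha-503681 node delete m03 --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5
    kubectl get nodes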

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 stop --alsologtostderr -v 5: (35.92627442s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5: exit status 7 (116.130179ms)

                                                
                                                
-- stdout --
	ha-503681
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-503681-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-503681-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:02:52.913694 1084467 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:02:52.913897 1084467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:02:52.913925 1084467 out.go:374] Setting ErrFile to fd 2...
	I1108 10:02:52.913947 1084467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:02:52.914236 1084467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:02:52.914463 1084467 out.go:368] Setting JSON to false
	I1108 10:02:52.914513 1084467 mustload.go:66] Loading cluster: ha-503681
	I1108 10:02:52.915029 1084467 config.go:182] Loaded profile config "ha-503681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:02:52.915082 1084467 status.go:174] checking status of ha-503681 ...
	I1108 10:02:52.914576 1084467 notify.go:221] Checking for updates...
	I1108 10:02:52.916138 1084467 cli_runner.go:164] Run: docker container inspect ha-503681 --format={{.State.Status}}
	I1108 10:02:52.935927 1084467 status.go:371] ha-503681 host status = "Stopped" (err=<nil>)
	I1108 10:02:52.935947 1084467 status.go:384] host is not running, skipping remaining checks
	I1108 10:02:52.935954 1084467 status.go:176] ha-503681 status: &{Name:ha-503681 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 10:02:52.935983 1084467 status.go:174] checking status of ha-503681-m02 ...
	I1108 10:02:52.936321 1084467 cli_runner.go:164] Run: docker container inspect ha-503681-m02 --format={{.State.Status}}
	I1108 10:02:52.958079 1084467 status.go:371] ha-503681-m02 host status = "Stopped" (err=<nil>)
	I1108 10:02:52.958099 1084467 status.go:384] host is not running, skipping remaining checks
	I1108 10:02:52.958114 1084467 status.go:176] ha-503681-m02 status: &{Name:ha-503681-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 10:02:52.958134 1084467 status.go:174] checking status of ha-503681-m04 ...
	I1108 10:02:52.958430 1084467 cli_runner.go:164] Run: docker container inspect ha-503681-m04 --format={{.State.Status}}
	I1108 10:02:52.980554 1084467 status.go:371] ha-503681-m04 host status = "Stopped" (err=<nil>)
	I1108 10:02:52.980581 1084467 status.go:384] host is not running, skipping remaining checks
	I1108 10:02:52.980588 1084467 status.go:176] ha-503681-m04 status: &{Name:ha-503681-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.04s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (64.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1108 10:03:22.716917 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:03:50.413317 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m3.320921296s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (64.39s)
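
For reference, the go-template query above simply walks .items[].status.conditions and prints the status of each node's "Ready" condition. Below is a minimal Go sketch of the same check (an illustration, not part of the test suite; it assumes kubectl is on PATH and the current context points at the restarted cluster):

-- example --
// Sketch: reproduce the readiness check performed by the go-template query
// above by asking kubectl for JSON and walking .items[].status.conditions.
// Assumes kubectl is on PATH and the current context targets the cluster.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatalf("kubectl get nodes: %v", err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		log.Fatalf("decode node list: %v", err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}
-- /example --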

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 node add --control-plane --alsologtostderr -v 5: (1m20.564823783s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-503681 status --alsologtostderr -v 5: (1.06859953s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.63s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.078476704s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                    
TestJSONOutput/start/Command (50.83s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-266002 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-266002 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (50.818391716s)
--- PASS: TestJSONOutput/start/Command (50.83s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.84s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-266002 --output=json --user=testUser
E1108 10:06:26.426472 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-266002 --output=json --user=testUser: (5.842299511s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-164061 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-164061 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.02464ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9256a289-bb23-458c-990e-8adbd50eee5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-164061] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fea6394-fe11-4931-a943-742252651f05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21865"}}
	{"specversion":"1.0","id":"2e1af4bf-72bf-4298-88d2-15021c352ad8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80645175-68b0-493e-83bb-d8503c9549fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig"}}
	{"specversion":"1.0","id":"61e255c2-953a-4b9f-ab47-0b4eaa59dbcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube"}}
	{"specversion":"1.0","id":"4545bdf7-7859-4f4e-a130-edd775329367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"fdf8d244-6404-4909-bf82-1618d57a4d14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a24d549-5067-427b-885f-aae47adc1aea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-164061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-164061
--- PASS: TestErrorJSONOutput (0.24s)
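
Each line of the --output=json stream above is a small CloudEvents-style envelope with specversion/id/source/type and a string-keyed data payload; the final io.k8s.sigs.minikube.error event carries the exit code and message. A minimal Go sketch for decoding such a line, assuming only the fields visible in the output above (the id value in the sketch is placeholder data):

-- example --
// Sketch: decode one line of minikube's --output=json stream and pick out the
// error event's exit code and message. Only the envelope fields visible in
// the output above are assumed; the id below is a placeholder.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"placeholder","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		log.Fatalf("decode event: %v", err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
	}
}
-- /example --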

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.48s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-083237 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-083237 --network=: (36.214814689s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-083237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-083237
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-083237: (2.236268918s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.48s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (37.67s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-352011 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-352011 --network=bridge: (35.47202301s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-352011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-352011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-352011: (2.176800681s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.67s)

                                                
                                    
TestKicExistingNetwork (34.92s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1108 10:07:50.250182 1029234 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1108 10:07:50.267368 1029234 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1108 10:07:50.268574 1029234 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1108 10:07:50.268625 1029234 cli_runner.go:164] Run: docker network inspect existing-network
W1108 10:07:50.285637 1029234 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1108 10:07:50.285671 1029234 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1108 10:07:50.285688 1029234 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1108 10:07:50.285805 1029234 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1108 10:07:50.305742 1029234 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f127b1978c3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:c7:37:65:8c:96} reservation:<nil>}
I1108 10:07:50.306126 1029234 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c85f60}
I1108 10:07:50.306149 1029234 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1108 10:07:50.306215 1029234 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1108 10:07:50.372600 1029234 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-627292 --network=existing-network
E1108 10:08:22.712075 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-627292 --network=existing-network: (32.716249535s)
helpers_test.go:175: Cleaning up "existing-network-627292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-627292
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-627292: (2.047259757s)
I1108 10:08:25.153967 1029234 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.92s)
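
The setup step above pre-creates the user-defined bridge network with plain docker CLI flags, after skipping the already-taken 192.168.49.0/24 and settling on 192.168.58.0/24. A standalone Go sketch that issues the same docker network create call seen in the log (it assumes the docker CLI is on PATH and that 192.168.58.0/24 is free on the host):

-- example --
// Sketch, not minikube's own helper: pre-create a user-defined bridge network
// with the same flags the log shows, so a later `minikube start
// --network=existing-network` can attach to it. Assumes the docker CLI is on
// PATH and 192.168.58.0/24 is free on this host.
package main

import (
	"log"
	"os/exec"
)

func main() {
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network",
	}
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		log.Fatalf("docker network create: %v\n%s", err, out)
	}
	log.Println("network existing-network created")
}
-- /example --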

                                                
                                    
TestKicCustomSubnet (37.58s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-095220 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-095220 --subnet=192.168.60.0/24: (35.381798093s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-095220 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-095220" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-095220
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-095220: (2.159158433s)
--- PASS: TestKicCustomSubnet (37.58s)

                                                
                                    
TestKicStaticIP (33.44s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-546130 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-546130 --static-ip=192.168.200.200: (31.001680506s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-546130 ip
helpers_test.go:175: Cleaning up "static-ip-546130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-546130
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-546130: (2.275361185s)
--- PASS: TestKicStaticIP (33.44s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (71.03s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-719982 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-719982 --driver=docker  --container-runtime=crio: (30.283934738s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-725668 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-725668 --driver=docker  --container-runtime=crio: (34.839032596s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-719982
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-725668
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-725668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-725668
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-725668: (2.200071574s)
helpers_test.go:175: Cleaning up "first-719982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-719982
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-719982: (2.118061774s)
--- PASS: TestMinikubeProfile (71.03s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.55s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-088603 --memory=3072 --mount-string /tmp/TestMountStartserial1982024518/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-088603 --memory=3072 --mount-string /tmp/TestMountStartserial1982024518/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.544946793s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.55s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-088603 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (10.37s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-091056 --memory=3072 --mount-string /tmp/TestMountStartserial1982024518/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-091056 --memory=3072 --mount-string /tmp/TestMountStartserial1982024518/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.369466635s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.37s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-091056 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-088603 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-088603 --alsologtostderr -v=5: (1.715515453s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-091056 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-091056
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-091056: (1.282578627s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.04s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-091056
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-091056: (7.039134559s)
--- PASS: TestMountStart/serial/RestartStopped (8.04s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-091056 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (133.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-666487 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1108 10:11:26.428198 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:13:22.712578 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-666487 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m13.264639144s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.82s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-666487 -- rollout status deployment/busybox: (3.463373875s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- exec busybox-7b57f96db7-628ft -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- exec busybox-7b57f96db7-x4d2q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- exec busybox-7b57f96db7-628ft -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- exec busybox-7b57f96db7-x4d2q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- exec busybox-7b57f96db7-628ft -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- exec busybox-7b57f96db7-x4d2q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.26s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- exec busybox-7b57f96db7-628ft -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- exec busybox-7b57f96db7-628ft -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- exec busybox-7b57f96db7-x4d2q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-666487 -- exec busybox-7b57f96db7-x4d2q -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
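
The in-pod pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) extracts the resolved host IP and then pings it once. Below is a rough Go equivalent driven from outside the pod via kubectl exec; the pod name is taken from the log, and the parsing assumes busybox-style nslookup output instead of hard-coding line 5:

-- example --
// Sketch of what the awk/cut pipeline above extracts: resolve
// host.minikube.internal inside a pod, take the answer address, and ping it
// once from the same pod. Pod name and output layout are assumptions taken
// from the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-628ft" // pod name as it appears in the log

	out, err := exec.Command("kubectl", "exec", pod, "--",
		"nslookup", "host.minikube.internal").Output()
	if err != nil {
		log.Fatalf("nslookup: %v", err)
	}

	// busybox nslookup prints the answer as "Address: <ip>"; take the last such
	// value rather than relying on a fixed line number like the awk call does.
	var hostIP string
	for _, line := range strings.Split(string(out), "\n") {
		if f := strings.Fields(line); len(f) >= 2 && strings.HasPrefix(f[0], "Address") {
			hostIP = f[len(f)-1]
		}
	}
	if hostIP == "" {
		log.Fatal("could not parse host IP from nslookup output")
	}
	fmt.Println("host.minikube.internal =", hostIP)

	if out, err := exec.Command("kubectl", "exec", pod, "--",
		"ping", "-c", "1", hostIP).CombinedOutput(); err != nil {
		log.Fatalf("ping: %v\n%s", err, out)
	}
	fmt.Println("ping OK")
}
-- /example --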

                                                
                                    
TestMultiNode/serial/AddNode (59.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-666487 -v=5 --alsologtostderr
E1108 10:14:29.494193 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-666487 -v=5 --alsologtostderr: (58.797832681s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.54s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-666487 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp testdata/cp-test.txt multinode-666487:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp multinode-666487:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile620553008/001/cp-test_multinode-666487.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp multinode-666487:/home/docker/cp-test.txt multinode-666487-m02:/home/docker/cp-test_multinode-666487_multinode-666487-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m02 "sudo cat /home/docker/cp-test_multinode-666487_multinode-666487-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp multinode-666487:/home/docker/cp-test.txt multinode-666487-m03:/home/docker/cp-test_multinode-666487_multinode-666487-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m03 "sudo cat /home/docker/cp-test_multinode-666487_multinode-666487-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp testdata/cp-test.txt multinode-666487-m02:/home/docker/cp-test.txt
E1108 10:14:45.776185 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp multinode-666487-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile620553008/001/cp-test_multinode-666487-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp multinode-666487-m02:/home/docker/cp-test.txt multinode-666487:/home/docker/cp-test_multinode-666487-m02_multinode-666487.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487 "sudo cat /home/docker/cp-test_multinode-666487-m02_multinode-666487.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp multinode-666487-m02:/home/docker/cp-test.txt multinode-666487-m03:/home/docker/cp-test_multinode-666487-m02_multinode-666487-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m03 "sudo cat /home/docker/cp-test_multinode-666487-m02_multinode-666487-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp testdata/cp-test.txt multinode-666487-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp multinode-666487-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile620553008/001/cp-test_multinode-666487-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp multinode-666487-m03:/home/docker/cp-test.txt multinode-666487:/home/docker/cp-test_multinode-666487-m03_multinode-666487.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487 "sudo cat /home/docker/cp-test_multinode-666487-m03_multinode-666487.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 cp multinode-666487-m03:/home/docker/cp-test.txt multinode-666487-m02:/home/docker/cp-test_multinode-666487-m03_multinode-666487-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 ssh -n multinode-666487-m02 "sudo cat /home/docker/cp-test_multinode-666487-m03_multinode-666487-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.68s)
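
The copy checks above all follow one pattern: minikube cp pushes a file onto a node, then minikube ssh ... "sudo cat ..." reads it back for comparison. A minimal Go sketch of that round trip for the primary node, reusing the binary path, profile name, and file paths shown in the log:

-- example --
// Sketch of the copy-and-verify round trip: push a local file into a node
// with `minikube cp`, read it back over `minikube ssh`, and compare contents.
// Binary path, profile, and file paths are taken from the log above.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const (
		bin     = "out/minikube-linux-arm64"
		profile = "multinode-666487"
		local   = "testdata/cp-test.txt"
		remote  = profile + ":/home/docker/cp-test.txt"
	)

	// Copy the file onto the primary node.
	if out, err := exec.Command(bin, "-p", profile, "cp", local, remote).CombinedOutput(); err != nil {
		log.Fatalf("cp: %v\n%s", err, out)
	}

	// Read it back over ssh and compare with the local copy.
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("ssh cat: %v", err)
	}
	want, err := os.ReadFile(local)
	if err != nil {
		log.Fatalf("read local file: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("contents differ:\n got: %q\nwant: %q", got, want)
	}
	log.Println("copied file matches")
}
-- /example --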

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-666487 node stop m03: (1.332077877s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-666487 status: exit status 7 (538.162061ms)

                                                
                                                
-- stdout --
	multinode-666487
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-666487-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-666487-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-666487 status --alsologtostderr: exit status 7 (523.769705ms)

                                                
                                                
-- stdout --
	multinode-666487
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-666487-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-666487-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:14:54.171231 1134748 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:14:54.171422 1134748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:14:54.171446 1134748 out.go:374] Setting ErrFile to fd 2...
	I1108 10:14:54.171502 1134748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:14:54.171896 1134748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:14:54.172154 1134748 out.go:368] Setting JSON to false
	I1108 10:14:54.172200 1134748 mustload.go:66] Loading cluster: multinode-666487
	I1108 10:14:54.172751 1134748 config.go:182] Loaded profile config "multinode-666487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:14:54.173102 1134748 status.go:174] checking status of multinode-666487 ...
	I1108 10:14:54.173065 1134748 notify.go:221] Checking for updates...
	I1108 10:14:54.174933 1134748 cli_runner.go:164] Run: docker container inspect multinode-666487 --format={{.State.Status}}
	I1108 10:14:54.193364 1134748 status.go:371] multinode-666487 host status = "Running" (err=<nil>)
	I1108 10:14:54.193384 1134748 host.go:66] Checking if "multinode-666487" exists ...
	I1108 10:14:54.193691 1134748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-666487
	I1108 10:14:54.216906 1134748 host.go:66] Checking if "multinode-666487" exists ...
	I1108 10:14:54.217318 1134748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:14:54.217404 1134748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-666487
	I1108 10:14:54.236262 1134748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34360 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/multinode-666487/id_rsa Username:docker}
	I1108 10:14:54.337782 1134748 ssh_runner.go:195] Run: systemctl --version
	I1108 10:14:54.344021 1134748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:14:54.357539 1134748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:14:54.411858 1134748 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-08 10:14:54.401695443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:14:54.412403 1134748 kubeconfig.go:125] found "multinode-666487" server: "https://192.168.67.2:8443"
	I1108 10:14:54.412514 1134748 api_server.go:166] Checking apiserver status ...
	I1108 10:14:54.412561 1134748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 10:14:54.427091 1134748 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup
	I1108 10:14:54.435535 1134748 api_server.go:182] apiserver freezer: "3:freezer:/docker/4ce288fe2db0743c6bfe44aed980f738c31511d857f9da8b0cc8fb5fdc276048/crio/crio-185eba316004a5a5e5f14c8d1aa1ae9eb366c639b7d17265328daf117231fa3b"
	I1108 10:14:54.435603 1134748 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4ce288fe2db0743c6bfe44aed980f738c31511d857f9da8b0cc8fb5fdc276048/crio/crio-185eba316004a5a5e5f14c8d1aa1ae9eb366c639b7d17265328daf117231fa3b/freezer.state
	I1108 10:14:54.443151 1134748 api_server.go:204] freezer state: "THAWED"
	I1108 10:14:54.443182 1134748 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1108 10:14:54.451249 1134748 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1108 10:14:54.451277 1134748 status.go:463] multinode-666487 apiserver status = Running (err=<nil>)
	I1108 10:14:54.451289 1134748 status.go:176] multinode-666487 status: &{Name:multinode-666487 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 10:14:54.451307 1134748 status.go:174] checking status of multinode-666487-m02 ...
	I1108 10:14:54.451618 1134748 cli_runner.go:164] Run: docker container inspect multinode-666487-m02 --format={{.State.Status}}
	I1108 10:14:54.468308 1134748 status.go:371] multinode-666487-m02 host status = "Running" (err=<nil>)
	I1108 10:14:54.468333 1134748 host.go:66] Checking if "multinode-666487-m02" exists ...
	I1108 10:14:54.468719 1134748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-666487-m02
	I1108 10:14:54.484795 1134748 host.go:66] Checking if "multinode-666487-m02" exists ...
	I1108 10:14:54.485109 1134748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 10:14:54.485155 1134748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-666487-m02
	I1108 10:14:54.501678 1134748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34365 SSHKeyPath:/home/jenkins/minikube-integration/21865-1027379/.minikube/machines/multinode-666487-m02/id_rsa Username:docker}
	I1108 10:14:54.606047 1134748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 10:14:54.618357 1134748 status.go:176] multinode-666487-m02 status: &{Name:multinode-666487-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1108 10:14:54.618389 1134748 status.go:174] checking status of multinode-666487-m03 ...
	I1108 10:14:54.618709 1134748 cli_runner.go:164] Run: docker container inspect multinode-666487-m03 --format={{.State.Status}}
	I1108 10:14:54.635107 1134748 status.go:371] multinode-666487-m03 host status = "Stopped" (err=<nil>)
	I1108 10:14:54.635128 1134748 status.go:384] host is not running, skipping remaining checks
	I1108 10:14:54.635135 1134748 status.go:176] multinode-666487-m03 status: &{Name:multinode-666487-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
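
The status check in the stderr above ends by probing the apiserver health endpoint and logging the result ("https://192.168.67.2:8443/healthz returned 200: ok"). A minimal Go sketch of that final step; the address comes from the log, and skipping TLS verification is a shortcut for this sketch only, since the real check authenticates against the cluster's certificates:

-- example --
// Sketch of the last step of the status check above: probe the apiserver
// /healthz endpoint and report the HTTP status. The address is taken from the
// log; InsecureSkipVerify is an assumption made for brevity in this sketch.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		log.Fatalf("healthz probe failed: %v", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
-- /example --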

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-666487 node start m03 -v=5 --alsologtostderr: (7.845562939s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.66s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (72.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-666487
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-666487
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-666487: (25.131742561s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-666487 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-666487 --wait=true -v=5 --alsologtostderr: (46.809471065s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-666487
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.07s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-666487 node delete m03: (4.984510707s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 stop
E1108 10:16:26.426376 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-666487 stop: (23.807871998s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-666487 status: exit status 7 (89.425773ms)

                                                
                                                
-- stdout --
	multinode-666487
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-666487-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-666487 status --alsologtostderr: exit status 7 (227.362415ms)

                                                
                                                
-- stdout --
	multinode-666487
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-666487-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:16:44.986047 1142508 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:16:44.986174 1142508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:16:44.986185 1142508 out.go:374] Setting ErrFile to fd 2...
	I1108 10:16:44.986190 1142508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:16:44.986545 1142508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:16:44.986767 1142508 out.go:368] Setting JSON to false
	I1108 10:16:44.986797 1142508 mustload.go:66] Loading cluster: multinode-666487
	I1108 10:16:44.987449 1142508 config.go:182] Loaded profile config "multinode-666487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:16:44.987466 1142508 status.go:174] checking status of multinode-666487 ...
	I1108 10:16:44.988189 1142508 cli_runner.go:164] Run: docker container inspect multinode-666487 --format={{.State.Status}}
	I1108 10:16:44.988545 1142508 notify.go:221] Checking for updates...
	I1108 10:16:45.011505 1142508 status.go:371] multinode-666487 host status = "Stopped" (err=<nil>)
	I1108 10:16:45.011534 1142508 status.go:384] host is not running, skipping remaining checks
	I1108 10:16:45.011543 1142508 status.go:176] multinode-666487 status: &{Name:multinode-666487 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 10:16:45.011576 1142508 status.go:174] checking status of multinode-666487-m02 ...
	I1108 10:16:45.011919 1142508 cli_runner.go:164] Run: docker container inspect multinode-666487-m02 --format={{.State.Status}}
	I1108 10:16:45.144722 1142508 status.go:371] multinode-666487-m02 host status = "Stopped" (err=<nil>)
	I1108 10:16:45.144748 1142508 status.go:384] host is not running, skipping remaining checks
	I1108 10:16:45.144756 1142508 status.go:176] multinode-666487-m02 status: &{Name:multinode-666487-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.13s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (57.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-666487 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-666487 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (57.171467721s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-666487 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.93s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-666487
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-666487-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-666487-m02 --driver=docker  --container-runtime=crio: exit status 14 (99.318613ms)

                                                
                                                
-- stdout --
	* [multinode-666487-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-666487-m02' is duplicated with machine name 'multinode-666487-m02' in profile 'multinode-666487'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-666487-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-666487-m03 --driver=docker  --container-runtime=crio: (36.521561922s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-666487
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-666487: exit status 80 (335.246728ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-666487 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-666487-m03 already exists in multinode-666487-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-666487-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-666487-m03: (2.036934474s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.05s)

                                                
                                    
TestPreload (122.62s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-679173 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-679173 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (59.044141211s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-679173 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-679173 image pull gcr.io/k8s-minikube/busybox: (2.143547084s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-679173
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-679173: (5.871608904s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-679173 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-679173 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (52.886757463s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-679173 image list
helpers_test.go:175: Cleaning up "test-preload-679173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-679173
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-679173: (2.442253847s)
--- PASS: TestPreload (122.62s)

                                                
                                    
TestScheduledStopUnix (109.84s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-666335 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-666335 --memory=3072 --driver=docker  --container-runtime=crio: (34.010124274s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-666335 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-666335 -n scheduled-stop-666335
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-666335 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1108 10:21:03.631576 1029234 retry.go:31] will retry after 141.892µs: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.635027 1029234 retry.go:31] will retry after 141.979µs: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.635451 1029234 retry.go:31] will retry after 144.144µs: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.635773 1029234 retry.go:31] will retry after 389.081µs: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.636564 1029234 retry.go:31] will retry after 558.116µs: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.637699 1029234 retry.go:31] will retry after 636.579µs: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.638808 1029234 retry.go:31] will retry after 1.205297ms: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.640990 1029234 retry.go:31] will retry after 2.267574ms: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.644184 1029234 retry.go:31] will retry after 2.017479ms: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.646340 1029234 retry.go:31] will retry after 4.97739ms: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.651553 1029234 retry.go:31] will retry after 4.29037ms: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.656767 1029234 retry.go:31] will retry after 5.584251ms: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.663029 1029234 retry.go:31] will retry after 13.848725ms: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.677275 1029234 retry.go:31] will retry after 14.279249ms: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.692561 1029234 retry.go:31] will retry after 34.461133ms: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
I1108 10:21:03.728157 1029234 retry.go:31] will retry after 43.528717ms: open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/scheduled-stop-666335/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-666335 --cancel-scheduled
E1108 10:21:26.427382 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-666335 -n scheduled-stop-666335
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-666335
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-666335 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-666335
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-666335: exit status 7 (73.80512ms)

                                                
                                                
-- stdout --
	scheduled-stop-666335
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-666335 -n scheduled-stop-666335
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-666335 -n scheduled-stop-666335: exit status 7 (68.319229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-666335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-666335
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-666335: (4.201995077s)
--- PASS: TestScheduledStopUnix (109.84s)
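The retry.go lines above show the test helper polling for the scheduled-stop pid file with steadily growing delays until the file appears. A minimal sketch of that backoff pattern, with a made-up file path and attempt limit (this is not minikube's retry implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile repeatedly tries to read the pid file, sleeping for an
// increasing interval between attempts, mirroring the retry log lines above.
func waitForPidFile(path string, attempts int) ([]byte, error) {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // grow the wait, as the increasing intervals in the log suggest
	}
	return nil, fmt.Errorf("pid file %s never appeared after %d attempts", path, attempts)
}

func main() {
	// Placeholder path for illustration only.
	if data, err := waitForPidFile("/tmp/scheduled-stop.pid", 16); err == nil {
		fmt.Printf("scheduled stop pid: %s\n", data)
	}
}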

                                                
                                    
TestInsufficientStorage (11.07s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-292724 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-292724 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.494999107s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d59f6e3b-d49f-42bd-a432-c7724378ec29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-292724] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"073c86c7-5713-4968-bfee-81196e270873","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21865"}}
	{"specversion":"1.0","id":"9a2050dd-52c8-419e-8f21-8736a0c8cb74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a5f5c2ab-f293-430d-9562-15aac842d4fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig"}}
	{"specversion":"1.0","id":"449a14ad-f560-4334-b853-0285ce799fe9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube"}}
	{"specversion":"1.0","id":"31f53241-50c3-4537-a7c5-51542a9add6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"03d07bfc-2204-4f94-9640-f3ff8dc39eee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"84ae0415-a929-4a40-bc2e-086472d55f4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"988e12af-3112-461b-9ac0-0ae7d71d75a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f5563071-2627-480a-92be-4e4cab2a90e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b980109-1298-4bc9-99a9-77f1d6cc6f42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"42097d47-e0d1-434b-8e3f-63867dd81e29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-292724\" primary control-plane node in \"insufficient-storage-292724\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9e72a02-4f2e-4c80-a947-095a8c337f19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e028fda5-473d-4797-b362-601299e5530e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1b791f5-a95f-4c9e-aa53-798a3102b128","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-292724 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-292724 --output=json --layout=cluster: exit status 7 (299.005785ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-292724","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-292724","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 10:22:27.694749 1158693 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-292724" does not appear in /home/jenkins/minikube-integration/21865-1027379/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-292724 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-292724 --output=json --layout=cluster: exit status 7 (303.850594ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-292724","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-292724","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 10:22:27.996606 1158761 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-292724" does not appear in /home/jenkins/minikube-integration/21865-1027379/kubeconfig
	E1108 10:22:28.007926 1158761 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/insufficient-storage-292724/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-292724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-292724
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-292724: (1.966647531s)
--- PASS: TestInsufficientStorage (11.07s)
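The --output=json start above emits one CloudEvents-style envelope per line (specversion, id, source, type, data), ending with the RSRC_DOCKER_STORAGE error event. A small decoding sketch; the struct and the sample line are illustrative, not a minikube API:

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the envelope fields visible in the stdout above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Abbreviated sample event, not copied verbatim from this run.
	line := `{"specversion":"1.0","id":"abc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("start failed: %s (exit code %s)\n", ev.Data["message"], ev.Data["exitcode"])
	}
}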

                                                
                                    
TestRunningBinaryUpgrade (53.43s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1740176078 start -p running-upgrade-980073 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1740176078 start -p running-upgrade-980073 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.375539292s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-980073 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-980073 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.388555783s)
helpers_test.go:175: Cleaning up "running-upgrade-980073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-980073
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-980073: (1.965223941s)
--- PASS: TestRunningBinaryUpgrade (53.43s)

                                                
                                    
TestKubernetesUpgrade (364.68s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.193413278s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-666491
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-666491: (1.442316741s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-666491 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-666491 status --format={{.Host}}: exit status 7 (98.303135ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.235711463s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-666491 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (117.501354ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-666491] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-666491
	    minikube start -p kubernetes-upgrade-666491 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6664912 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-666491 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-666491 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.760421664s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-666491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-666491
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-666491: (2.69648143s)
--- PASS: TestKubernetesUpgrade (364.68s)
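The K8S_DOWNGRADE_UNSUPPORTED exit above comes from minikube refusing to move an existing cluster to an older Kubernetes version. A hedged sketch of such a guard using golang.org/x/mod/semver (an assumed dependency; the function and messages are illustrative, not minikube's own code):

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkDowngrade rejects any request for a Kubernetes version older than the
// one the existing cluster is already running.
func checkDowngrade(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s; delete and recreate the profile instead", current, requested)
	}
	return nil
}

func main() {
	// Versions taken from the log above: the cluster is at v1.34.1 and the
	// test asks for v1.28.0, which should fail.
	if err := checkDowngrade("v1.34.1", "v1.28.0"); err != nil {
		fmt.Println("X", err)
	}
}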

                                                
                                    
TestMissingContainerUpgrade (119.3s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3892857694 start -p missing-upgrade-625347 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3892857694 start -p missing-upgrade-625347 --memory=3072 --driver=docker  --container-runtime=crio: (1m6.005344663s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-625347
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-625347
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-625347 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-625347 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.542025507s)
helpers_test.go:175: Cleaning up "missing-upgrade-625347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-625347
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-625347: (2.196661665s)
--- PASS: TestMissingContainerUpgrade (119.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-012922 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-012922 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (123.948979ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-012922] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-012922 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-012922 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.940393582s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-012922 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.38s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-012922 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-012922 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.382762276s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-012922 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-012922 status -o json: exit status 2 (386.871343ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-012922","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-012922
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-012922: (2.130409366s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.90s)

                                                
                                    
TestNoKubernetes/serial/Start (8.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-012922 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1108 10:23:22.712378 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-012922 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.043619187s)
--- PASS: TestNoKubernetes/serial/Start (8.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-012922 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-012922 "sudo systemctl is-active --quiet service kubelet": exit status 1 (428.233915ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (3.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-arm64 profile list: (3.329659007s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.90s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-012922
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-012922: (1.327833953s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-012922 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-012922 --driver=docker  --container-runtime=crio: (7.946053022s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.95s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-012922 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-012922 "sudo systemctl is-active --quiet service kubelet": exit status 1 (374.406747ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (56.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2910336032 start -p stopped-upgrade-660964 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2910336032 start -p stopped-upgrade-660964 --memory=3072 --vm-driver=docker  --container-runtime=crio: (37.493428409s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2910336032 -p stopped-upgrade-660964 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2910336032 -p stopped-upgrade-660964 stop: (1.230459469s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-660964 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-660964 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.258813491s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (56.98s)
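The upgrade path exercised above is: start a profile with an older release binary, stop it, then start the same profile with the binary under test. A rough sketch of that flow with os/exec; the binary paths and profile name are placeholders, not the ones from this run:

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one minikube invocation and streams its output.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	const profile = "stopped-upgrade-demo" // placeholder profile name
	oldBinary := "/tmp/minikube-v1.32.0"   // placeholder path to the legacy release
	newBinary := "out/minikube-linux-arm64"

	// Same flag shapes as the logged commands: old binary uses --vm-driver,
	// the current binary uses --driver.
	run(oldBinary, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=crio")
	run(oldBinary, "-p", profile, "stop")
	run(newBinary, "start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=crio")
}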

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-660964
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-660964: (1.149351958s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                    
TestPause/serial/Start (82.29s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-343192 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1108 10:26:26.426406 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-343192 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.286127674s)
--- PASS: TestPause/serial/Start (82.29s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.96s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-343192 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-343192 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.949628962s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.96s)

                                                
                                    
TestNetworkPlugins/group/false (5.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-731120 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-731120 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (254.471018ms)

                                                
                                                
-- stdout --
	* [false-731120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 10:29:09.403523 1195748 out.go:360] Setting OutFile to fd 1 ...
	I1108 10:29:09.403651 1195748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:29:09.403659 1195748 out.go:374] Setting ErrFile to fd 2...
	I1108 10:29:09.403663 1195748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 10:29:09.403938 1195748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-1027379/.minikube/bin
	I1108 10:29:09.404403 1195748 out.go:368] Setting JSON to false
	I1108 10:29:09.405302 1195748 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33095,"bootTime":1762564655,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1108 10:29:09.405374 1195748 start.go:143] virtualization:  
	I1108 10:29:09.408735 1195748 out.go:179] * [false-731120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1108 10:29:09.412548 1195748 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 10:29:09.413456 1195748 notify.go:221] Checking for updates...
	I1108 10:29:09.418590 1195748 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 10:29:09.422774 1195748 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-1027379/kubeconfig
	I1108 10:29:09.425610 1195748 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-1027379/.minikube
	I1108 10:29:09.428528 1195748 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1108 10:29:09.431343 1195748 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 10:29:09.434917 1195748 config.go:182] Loaded profile config "kubernetes-upgrade-666491": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 10:29:09.435031 1195748 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 10:29:09.485616 1195748 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1108 10:29:09.485743 1195748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 10:29:09.579254 1195748 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-08 10:29:09.569802209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1108 10:29:09.579357 1195748 docker.go:319] overlay module found
	I1108 10:29:09.582416 1195748 out.go:179] * Using the docker driver based on user configuration
	I1108 10:29:09.585219 1195748 start.go:309] selected driver: docker
	I1108 10:29:09.585242 1195748 start.go:930] validating driver "docker" against <nil>
	I1108 10:29:09.585256 1195748 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 10:29:09.588795 1195748 out.go:203] 
	W1108 10:29:09.591733 1195748 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1108 10:29:09.594596 1195748 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-731120 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-731120" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-731120" does not exist
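
Against a live profile the kube-proxy checks can be reproduced directly; a rough sketch, with example-cluster again standing in for a real profile name:

# NAT rules kube-proxy programs on the node
minikube ssh -p example-cluster -- sudo iptables -t nat -S | grep KUBE-SERVICES
# the daemon set and its recent logs
kubectl --context example-cluster -n kube-system describe ds kube-proxy
kubectl --context example-cluster -n kube-system logs -l k8s-app=kube-proxy --tail=50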

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 10:29:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-666491
contexts:
- context:
    cluster: kubernetes-upgrade-666491
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 10:29:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-666491
  name: kubernetes-upgrade-666491
current-context: kubernetes-upgrade-666491
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-666491
  user:
    client-certificate: /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kubernetes-upgrade-666491/client.crt
    client-key: /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kubernetes-upgrade-666491/client.key
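
This kubeconfig only contains the kubernetes-upgrade-666491 context, which is why every kubectl-based probe above failed for false-731120. Verifying which contexts a kubeconfig holds is plain kubectl, nothing test-specific:

# list all contexts and the currently selected one
kubectl config get-contexts
kubectl config current-context
# referencing a context that is not listed reproduces the errors above
kubectl --context false-731120 get pods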

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-731120

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-731120"
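
When the profile is running, the same container-runtime checks go over minikube ssh; a minimal sketch with a placeholder profile name:

# cri-o service state and effective configuration on the node
minikube ssh -p example-cluster -- sudo systemctl status crio
minikube ssh -p example-cluster -- sudo crio config
# pods and containers as seen through the CRI
minikube ssh -p example-cluster -- sudo crictl pods
minikube ssh -p example-cluster -- sudo crictl ps -a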

                                                
                                                
----------------------- debugLogs end: false-731120 [took: 5.062066183s] --------------------------------
helpers_test.go:175: Cleaning up "false-731120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-731120
--- PASS: TestNetworkPlugins/group/false (5.57s)
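
The cleanup step is the standard profile teardown; the hand-run equivalent is simply:

# confirm which profiles exist, then remove the one the test created
minikube profile list
minikube delete -p false-731120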

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (62.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1108 10:31:09.498319 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:31:25.778149 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:31:26.426861 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.309815697s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-171136 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bb27a248-1db0-4b58-a6df-586ba5fd017f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bb27a248-1db0-4b58-a6df-586ba5fd017f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003937777s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-171136 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.51s)
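
The DeployApp step applies testdata/busybox.yaml and waits for the integration-test=busybox label to become Ready before probing ulimit. The manifest itself is not reproduced in this log; the following is an illustrative stand-in that matches the label and the busybox image seen later in VerifyKubernetesImages, not the repository's actual testdata file:

kubectl --context old-k8s-version-171136 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF
# wait the same way the test does, then check the open-file limit
kubectl --context old-k8s-version-171136 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
kubectl --context old-k8s-version-171136 exec busybox -- /bin/sh -c "ulimit -n"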

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-171136 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-171136 --alsologtostderr -v=3: (12.013237125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-171136 -n old-k8s-version-171136
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-171136 -n old-k8s-version-171136: exit status 7 (79.91016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-171136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
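
Worth noting: minikube status --format={{.Host}} exits with status 7 when the profile is stopped, which the test explicitly tolerates, and addons can still be toggled against the stopped profile. A generic hand-run sketch:

# a stopped profile prints "Stopped" and returns exit code 7
out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-171136 ; echo "exit=$?"
# enabling the dashboard addon does not require the node to be running
out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-171136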

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (50.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-171136 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.55559604s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-171136 -n old-k8s-version-171136
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.95s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-k8zsb" [16871c16-e616-4ff3-8dfa-809dcd2a3b26] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004143s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-k8zsb" [16871c16-e616-4ff3-8dfa-809dcd2a3b26] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003398462s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-171136 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)
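
UserAppExistsAfterStop and AddonExistsAfterStop both just watch the kubernetes-dashboard label after the restart; a rough manual equivalent using the same context:

# dashboard pods land in the kubernetes-dashboard namespace
kubectl --context old-k8s-version-171136 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
kubectl --context old-k8s-version-171136 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
# the scraper deployment the test describes afterwards
kubectl --context old-k8s-version-171136 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper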

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-171136 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
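
VerifyKubernetesImages lists the images present in the node's runtime and flags anything outside the expected minikube/Kubernetes set. A hand-run equivalent (the table format is assumed to be available in this minikube build; the JSON form is what the test parses):

# human-readable listing to eyeball unexpected images
out/minikube-linux-arm64 -p old-k8s-version-171136 image list --format=table
# raw JSON as consumed by the test
out/minikube-linux-arm64 -p old-k8s-version-171136 image list --format=json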

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 10:33:22.712732 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.642892473s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.64s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (85.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.830099536s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.83s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-236075 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f71e109f-b88f-4781-b4ac-aaabd22ff178] Pending
helpers_test.go:352: "busybox" [f71e109f-b88f-4781-b4ac-aaabd22ff178] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f71e109f-b88f-4781-b4ac-aaabd22ff178] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004225068s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-236075 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-236075 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-236075 --alsologtostderr -v=3: (11.998494454s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075: exit status 7 (75.321597ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-236075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-236075 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.04154026s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-236075 -n default-k8s-diff-port-236075
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.52s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-790346 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [85b2c572-22bf-44ec-98e1-3e867fa1882e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [85b2c572-22bf-44ec-98e1-3e867fa1882e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003960324s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-790346 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-790346 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-790346 --alsologtostderr -v=3: (12.027627179s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790346 -n embed-certs-790346
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790346 -n embed-certs-790346: exit status 7 (74.900703ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-790346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (55.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-790346 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.009283251s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790346 -n embed-certs-790346
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.43s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9bgcn" [24830468-2da1-4071-a4ca-9add3a940f75] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002690284s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9bgcn" [24830468-2da1-4071-a4ca-9add3a940f75] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003691477s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-236075 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-236075 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 10:36:26.426369 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:36:45.537260 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:36:45.543720 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:36:45.555073 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:36:45.576445 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:36:45.617723 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:36:45.699118 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:36:45.860596 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:36:46.182210 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:36:46.824261 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:36:48.105579 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:36:50.666829 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m11.778231172s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.78s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xxk4p" [93803341-192d-4b90-b40b-724ae93d83cf] Running
E1108 10:36:55.788155 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003493176s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xxk4p" [93803341-192d-4b90-b40b-724ae93d83cf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00396886s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-790346 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-790346 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (36.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 10:37:26.515057 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (36.912846799s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-291044 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [19b26969-1d1f-4969-bd57-67043e5a7c30] Pending
helpers_test.go:352: "busybox" [19b26969-1d1f-4969-bd57-67043e5a7c30] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00387067s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-291044 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-291044 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-291044 --alsologtostderr -v=3: (12.103763147s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-515571 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-515571 --alsologtostderr -v=3: (1.31907013s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-515571 -n newest-cni-515571
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-515571 -n newest-cni-515571: exit status 7 (67.263762ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-515571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (19.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-515571 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (18.765804738s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-515571 -n newest-cni-515571
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.36s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-291044 -n no-preload-291044
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-291044 -n no-preload-291044: exit status 7 (101.54676ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-291044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (53.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 10:38:07.477959 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-291044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.985728246s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-291044 -n no-preload-291044
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.43s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-515571 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (86.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.074946908s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rttff" [a722ea55-9e8c-4c23-aa7f-ad48c06d67ec] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003887969s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rttff" [a722ea55-9e8c-4c23-aa7f-ad48c06d67ec] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002942422s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-291044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-291044 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestNetworkPlugins/group/kindnet/Start (80.58s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1108 10:39:29.399228 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:39:46.479569 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:39:46.485858 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:39:46.497198 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:39:46.518551 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:39:46.559849 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:39:46.641182 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:39:46.802596 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:39:47.124323 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:39:47.765619 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:39:49.047753 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:39:51.609687 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m20.578499953s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.58s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-731120 "pgrep -a kubelet"
I1108 10:39:55.499895 1029234 config.go:182] Loaded profile config "auto-731120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-731120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-slnbd" [a3cee0de-5b13-4699-8365-32bc9b1a7a81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 10:39:56.730983 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-slnbd" [a3cee0de-5b13-4699-8365-32bc9b1a7a81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003936869s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.39s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-731120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/calico/Start (69.06s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1108 10:40:27.454376 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m9.063400182s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.06s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-qknsd" [e17fc5b0-09fe-45ab-b89c-ce1174a0d0e4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004849581s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-731120 "pgrep -a kubelet"
I1108 10:40:46.324883 1029234 config.go:182] Loaded profile config "kindnet-731120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-731120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hkfp9" [eaf4318f-adf5-4fe0-aece-fe6a044800d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hkfp9" [eaf4318f-adf5-4fe0-aece-fe6a044800d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003592729s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-731120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/Start (70.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1108 10:41:26.426819 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/addons-517137/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m10.50211969s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.50s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-vwj9s" [de299ca1-5e27-4e91-9231-1f208277983b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-vwj9s" [de299ca1-5e27-4e91-9231-1f208277983b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.021569103s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-731120 "pgrep -a kubelet"
I1108 10:41:42.728002 1029234 config.go:182] Loaded profile config "calico-731120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (13.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-731120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zsppp" [ffe0b432-4878-4dbc-828b-d3d3554d8ef2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 10:41:45.537118 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/old-k8s-version-171136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-zsppp" [ffe0b432-4878-4dbc-828b-d3d3554d8ef2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.00398787s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.34s)

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-731120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/Start (80.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1108 10:42:30.338347 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/default-k8s-diff-port-236075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m20.367133415s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.37s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-731120 "pgrep -a kubelet"
I1108 10:42:35.804234 1029234 config.go:182] Loaded profile config "custom-flannel-731120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-731120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sgvk6" [4e4c3a06-6ab1-482b-bbe5-d30de318f2a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 10:42:37.896462 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:42:37.902753 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:42:37.914038 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:42:37.935843 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:42:37.977173 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:42:38.058747 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:42:38.221661 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:42:38.543878 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:42:39.185570 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:42:40.468150 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-sgvk6" [4e4c3a06-6ab1-482b-bbe5-d30de318f2a3] Running
E1108 10:42:43.030203 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003726345s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-731120 exec deployment/netcat -- nslookup kubernetes.default
E1108 10:42:48.152090 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (59.96s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1108 10:43:18.875299 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/no-preload-291044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:43:22.712675 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/functional-386623/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.962849727s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.96s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-731120 "pgrep -a kubelet"
I1108 10:43:42.787902 1029234 config.go:182] Loaded profile config "enable-default-cni-731120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-731120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4rd58" [fa6ef337-2b23-47dc-add0-556d40c2db4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4rd58" [fa6ef337-2b23-47dc-add0-556d40c2db4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004506714s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.37s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-731120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-m77vb" [82a9daf0-9926-4c36-94bf-a35d502927b1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003618031s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (82.29s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-731120 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m22.288846943s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.29s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-731120 "pgrep -a kubelet"
I1108 10:44:18.918038 1029234 config.go:182] Loaded profile config "flannel-731120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (10.47s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-731120 replace --force -f testdata/netcat-deployment.yaml
I1108 10:44:19.305536 1029234 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xnxk8" [206297ba-bbd6-437f-aa07-582c6e1fd9f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xnxk8" [206297ba-bbd6-437f-aa07-582c6e1fd9f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00471963s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.47s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-731120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-731120 "pgrep -a kubelet"
E1108 10:45:36.823967 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/auto-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1108 10:45:36.840251 1029234 config.go:182] Loaded profile config "bridge-731120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-731120 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4h6pn" [1b7e7410-26c8-4c08-894c-048e5bd812cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 10:45:39.851154 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kindnet-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:39.857828 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kindnet-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:39.869558 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kindnet-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:39.891343 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kindnet-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:39.933018 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kindnet-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:40.016380 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kindnet-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:40.178716 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kindnet-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:40.500624 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kindnet-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-4h6pn" [1b7e7410-26c8-4c08-894c-048e5bd812cf] Running
E1108 10:45:41.142787 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kindnet-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:42.424348 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kindnet-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 10:45:44.985707 1029234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kindnet-731120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00416467s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-731120 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
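Note: the DNS, Localhost, and HairPin checks above are plain kubectl execs against the netcat deployment that each TestNetworkPlugins profile creates from testdata/netcat-deployment.yaml, so they can be re-run by hand against any profile in this report. The commands below are copied from the log entries above; the bridge-731120 context is used purely as an example and must be replaced with an existing profile:

  kubectl --context bridge-731120 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context bridge-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context bridge-731120 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"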

                                                
                                    

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-871809 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-871809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-871809
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-553553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-553553
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-731120 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-731120" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-731120" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 10:24:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-666491
contexts:
- context:
    cluster: kubernetes-upgrade-666491
    user: kubernetes-upgrade-666491
  name: kubernetes-upgrade-666491
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-666491
  user:
    client-certificate: /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kubernetes-upgrade-666491/client.crt
    client-key: /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kubernetes-upgrade-666491/client.key
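The kubeconfig captured above defines only the kubernetes-upgrade-666491 context, which is why every lookup against kubenet-731120 in this debug log reports "context was not found". A quick way to confirm which contexts a kubeconfig actually holds (sketch, using stock kubectl commands):

	kubectl config get-contexts                              # lists defined contexts; kubenet-731120 is absent here
	kubectl config use-context kubernetes-upgrade-666491     # selects the only context present in this file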

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-731120

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-731120"

                                                
                                                
----------------------- debugLogs end: kubenet-731120 [took: 5.029608362s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-731120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-731120
--- SKIP: TestNetworkPlugins/group/kubenet (5.30s)
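As background for the skip reason recorded above (the crio runtime needs a CNI, so the legacy kubenet mode is not exercised), a start command with an explicit CNI would look roughly like the sketch below; the bridge plugin is chosen purely as an example:

	out/minikube-linux-arm64 start -p kubenet-731120 --container-runtime=crio --cni=bridge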

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-731120 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-731120" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21865-1027379/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 10:29:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-666491
contexts:
- context:
    cluster: kubernetes-upgrade-666491
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 10:29:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-666491
  name: kubernetes-upgrade-666491
current-context: kubernetes-upgrade-666491
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-666491
  user:
    client-certificate: /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kubernetes-upgrade-666491/client.crt
    client-key: /home/jenkins/minikube-integration/21865-1027379/.minikube/profiles/kubernetes-upgrade-666491/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-731120

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-731120" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-731120"

                                                
                                                
----------------------- debugLogs end: cilium-731120 [took: 4.627581537s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-731120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-731120
--- SKIP: TestNetworkPlugins/group/cilium (4.78s)

                                                
                                    